
Spotlight on Sean Hughes of ServiceNow

Member spotlight
  • Name: Sean Hughes
  • Title: AI Ecosystem Director, ServiceNow
  • A fun fact about you, your favorite quote, or piece of advice you’d like to share: “Do or do not, there is no try!” (Yoda, Star Wars: Episode V – The Empire Strikes Back, 1980)

Tell us about yourself and what you are responsible for at your company.

As the AI Ecosystem Director at ServiceNow Research, I support our open scientific collaboration efforts in the external AI community. I am part of the core team that started and co-leads the BigCode Project, a collaboration with AI/ML research scientists from around the world, stewarded by Hugging Face and ServiceNow Research, toward the responsible development of large language models for code. Through that collaboration we have released The Stack dataset and the StarCoder family of models. With ServiceNow being a founding member of the AI Alliance, I have since taken on the additional responsibilities of co-leading the Foundation Models working group, as well as the overarching AI Community efforts. I also support the ServiceNow Research team with open sourcing and driving awareness of our state-of-the-art research projects.

Why did you and/or your organization join the AI Alliance?

BigCode demonstrated that open innovation can drive real-world impact for ServiceNow and our customers through massive adoption and use of BigCode datasets and models. ServiceNow fine-tuned the StarCoder models to create code generation, workflow generation, and even a text summarization suite of ServiceNow LLMs that have demonstrated tremendous value for ServiceNow customers. Through the AI Alliance consortium, we aim to build on the best practices and lessons learned from BigCode to continue our open scientific collaborations at a much larger scale and with the potential for even more impact.

What’s a project you’re working on right now? Why is it meaningful to you?

Right now, we are going through the ideation phase for a number of AI Alliance Core Projects, spanning areas such as datasets, benchmarks, time series, model training frameworks, and more. ServiceNow Research and IBM have recently had GEO-Bench-2 accepted as a Core Project to run with the AI Alliance – we are in the process of initializing that project. GEO-Bench-2 has the potential to help advance the state of the art in geospatial earth observation, with downstream potential to help build more robust multi-modal models in support of enterprise AI.

How does the AI Alliance’s mission align with your organization’s core values?

AI innovation must remain open to drive positive and equitable societal impact, foster continued progress, and address potential risks collaboratively. There is no room for a winner-take-all approach; the development of responsible, secure LLMs comes in many forms. By joining this alliance, ServiceNow is doubling down on open innovation, harnessing the collective talent and expertise across the global open-source community. This move aims to ensure AI development is grounded in the needs of society, both today and in the future.

What about an open approach to AI innovation inspires you?

For me, open scientific collaboration and open-source AI provide fuel for the virtuous cycle of open innovation. Being on the core team of BigCode for the last two years gave me a front-row seat to the power of a community working together to achieve more than any single organization can achieve on its own, if at all. Tackling wicked problems and supporting open-source AI together with the AI Alliance gives us the opportunity to make meaningful contributions back to the AI community. These projects are very fulfilling, and I am thankful for the opportunities to participate in the responsible development of generative AI.

What's next on the horizon for your organization?

ServiceNow continues to advance the state of the art in enterprise AI. We have researchers committed to fundamental and applied research in areas such as conversational agents, human decision support, foundation models, safety and security, AI trust and governance, emerging capabilities, research acceleration, and more.

Where might we see you next and how can the AI Alliance community get in touch with you?

In addition to AI Alliance community events, I am looking forward to meeting with the AI research community at NeurIPS ’24. Anyone with a common interest and shared vision is encouraged to join the AI Alliance and to make impactful contributions back to the open source AI community. You can find me actively participating in various working groups and can connect via the AI Alliance collaboration channels.

Related Articles


How Can We Test Enterprise AI Applications?

The AI Alliance’s Trust and Safety Focus Area has released version V0.2.0 of the “Achieving Confidence in Enterprise AI Applications” guide, addressing one of the biggest challenges in enterprise adoption of generative AI: how to test probabilistic systems. Traditional enterprise developers are accustomed to deterministic testing, but AI introduces new complexities. The living guide bridges this gap by adapting benchmark techniques into unit, integration, and acceptance benchmarks for AI applications. It shows how to leverage LLMs to generate and validate datasets, reduce randomness in application design, and identify AI “features” that can be developed incrementally in agile workflows. A practical healthcare chatbot example demonstrates how FAQs can be handled deterministically while still using LLMs for flexible input interpretation, balancing trust, safety, and innovation. This release marks a step forward in helping developers confidently design, test, and deploy enterprise-grade AI systems, while inviting broader collaboration from the community.

Building a Deep Research Agent Using MCP-Agent

This article by Sarmad Qadri documents the journey of building a Deep Research Agent with MCP-Agent, highlighting the evolution from an initial Orchestrator design, to an over-engineered Adaptive Workflow, and finally to the streamlined Deep Orchestrator. The author emphasizes that “MCP is all you need,” showing how connecting LLMs to MCP servers with simple design patterns enables agents to perform complex, multi-step research tasks. Key lessons include the importance of simplicity over complexity, leveraging deterministic code-based verification alongside LLM reasoning, external memory for efficiency, and structured prompting for clarity. The resulting Deep Orchestrator balances performance, scalability, and adaptability, proving effective across domains like finance research. Future directions include remote execution, intelligent tool and model selection, and treating memory/knowledge as MCP resources. The open-source project, available on GitHub, offers developers a powerful foundation for creating general-purpose AI research agents.

The AI Alliance Forms Non-profit AI Lab and AI Technology & Advocacy Association to Scale Open-Source Innovation

New legal entities and boards intend to scale the AI Alliance’s mission to support and perform open-source development, open research, education, and advocacy for AI globally.