Spotlight on Raphaël Vienne of datacraft

Member spotlight
  • Full name: Raphaël Vienne
  • Title: Head of AI, datacraft
  • How long has your company been a member of the AI Alliance: 2 months
  • A fun fact about you: I have such a big appetite that my mother once thought I had a tapeworm.

Tell us about yourself. What are you responsible for at your company?

I’m Raphaël Vienne, Head of AI at datacraft. I have an engineering background and previously did NLP research for the medical domain.

In my personal life, I gravitate toward sports and work, and when I have free time, I either work on personal projects and learning or connect with like-minded people.

At datacraft, I’m responsible for several things:

  • our full program of technical events, ideally twice per week
  • our thought leadership through technical content, education, and popularization
  • the technical aspects of our innovative services, such as our treasure-hunt hackathons or our outdoor GenAI escape game in Paris
  • partnerships, with the AI Alliance and others

Why did you and/or your organization join the AI Alliance?

We believe that openness and trust can only be achieved at scale, when everyone is under the same roof, and we hope for a future where AI is democratized for everyone.

The AI Alliance’s stance on that topic resonated with us and we’re proud to be part of this journey.

What’s a project you’re working on right now? Why is it meaningful to you?

I’m currently helping four of our affiliated companies participate in the ENS Data Challenge, a French academic Kaggle-style competition. It is an annual machine learning competition where companies open-source datasets and let students hack on a machine learning use case for a year.

This project is very important to me for two reasons:

  • I get to know more about our members’ activities and use cases, reinforcing our partnerships.
  • It’s a first step toward companies treating the open-sourcing of some of their data as routine, and reaping the benefits of such a commitment.

How does the AI Alliance’s mission align with your organization’s core values?

We strongly believe in openness and the responsible development of AI technologies. Our technical content always has been, and always will be, open source when applicable, so there was a clear match.

We also run a stream of events on Trustable AI, which I manage, and we often challenge our affiliated companies on openness.

datacraft is about sharing and learning from your peers, a kind of openness that is rooted in the AI Alliance’s core values.

What about an open approach to AI innovation inspires you?

This approach seems very positive to me: it’s a journey where you get to meet brilliant, like-minded people and to collaborate at a scale never seen before.

What’s next on the horizon for your organization?

Our main objective is to keep growing: maintaining a good pace of events and making them ever more relevant. We keep developing innovative new services so that anyone can get involved in AI, whatever their background.

We also plan to open new clubs abroad: the next one will be in Morocco, and we’re considering San Francisco, Berlin, or Asia for the third.

Where might we see you next and how can the AI Alliance community get in touch with you?

You might see me at an AI Alliance meetup in Europe soon. You can get in touch at raphael dot vienne at datacraft dot paris or via the Alliance Slack; feel free to send me a message!
