November 26th 2025 • fully remote • free registration

Agentic AI in Action

This isn’t just a competition — it’s a crowdsourced research project exploring how agentic AI can solve real-world business challenges.

Let's solve real business use cases with agentic AI

The Enterprise RAG Challenge (ERC) returns — and this time, we’re diving into the world of Agentic AI.
In the third edition of ERC, we will build autonomous AI agents that can operate inside a simulated enterprise environment — reasoning, planning, and acting to solve real-world business tasks.

Participants can look forward to

Community Exchange

Connect with innovators, share ideas, and collaborate on real-world AI challenges.

Prizes up to €500

Win vouchers worth up to €500 for the best agentic AI solutions

Keynote by Eric Evans

Gain insights from Domain-Driven Design expert Eric Evans on bridging AI and enterprise domains.

Stay tuned! More information coming soon!

The Challenge

Participants will design and develop AI agents capable of performing complex tasks in a dynamic company simulation — the Agentic Enterprise Simulation (AGES).


AGES provides a realistic enterprise API with data such as:

  • Employees and their skills
  • Calendars and availability
  • Office locations and departments
  • Ongoing projects and tasks
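
The real AGES endpoints and schema will only be known once the official spec is published, but querying the enterprise API might look roughly like this minimal Python sketch. The base URL, paths, and field names below are assumptions for illustration, not the actual API:

import requests

# Hypothetical sketch only: the real AGES base URL, endpoint paths, and
# response fields will come with the published spec. These are placeholders.
BASE_URL = "https://ages.example.com/api"

def list_employees():
    """Fetch the simulated employee directory (assumed endpoint)."""
    response = requests.get(f"{BASE_URL}/employees", timeout=10)
    response.raise_for_status()
    return response.json()

for employee in list_employees():
    # Assumed fields: each employee record carries a name and a skill list.
    print(employee["name"], employee.get("skills", []))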

Resources provided

Details about the Agentic Enterprise Simulation (AGES):

  • It is a discrete-event simulation, similar to our Logistic Simulation of Europe.
  • Teams will get access to an API that provides a list of human-readable tasks and lets them start working on each one. Your agent will need to pull the next task and then use the simulated APIs to carry it out (see the sketch after this list).
  • The APIs will be published in advance, along with documentation and a public spec.
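
Since the spec is not yet public, the pull-a-task-then-act flow described above can only be sketched. The following minimal Python loop shows the general shape; every endpoint path, field name, and the solve() helper are hypothetical placeholders:

import requests

BASE_URL = "https://ages.example.com/api"  # placeholder; the real URL ships with the spec

def solve(task):
    """Placeholder for your agent logic (reasoning, planning, tool calls)."""
    raise NotImplementedError

def run_agent():
    """Minimal task loop: pull a task, act on it via simulated APIs, submit."""
    while True:
        # Assumed endpoint: ask the simulation for the next human-readable task.
        task = requests.get(f"{BASE_URL}/tasks/next", timeout=10).json()
        if task is None:
            break  # no more tasks in this run

        # Here your agent would reason about task["description"] and call the
        # simulated enterprise APIs (employees, calendars, projects, ...) to
        # carry it out. This is where your LLM or framework of choice plugs in.
        answer = solve(task)

        # Assumed endpoint: report the result for this task.
        requests.post(f"{BASE_URL}/tasks/{task['id']}/answer",
                      json={"answer": answer}, timeout=10)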

We will host a dry-run workshop (roughly two weeks before the competition) to demonstrate how to use these APIs, answer questions, and build a simple agent together in public. The workshop will be announced both on Discord and via email.

FAQ

Which technologies / frameworks are allowed?
You can use any frameworks and technologies. Anything that can call remote APIs (to retrieve the next task and then complete the assignment) will work.
What criteria will be used to judge submissions?

Judging will follow the same approach as in previous rounds of the Enterprise RAG Challenge. All submissions will be ranked based on accuracy, using predefined ground truth answers for each task. After the competition, all tasks, submissions and ground truth answers will be shared publicly.

Will the submissions be made public?
Yes. As noted above, all submissions will be shared publicly after the competition. We will also rank them across multiple leaderboards, including a global leaderboard and a separate leaderboard for submissions that used only locally deployable models.

Who Should Participate

The challenge is designed for:

  • Developers and data scientists working with RAG or large language models,
  • Enterprises aiming to test AI applications with high reliability and explainability,
  • Researchers and students exploring the frontiers of agentic AI and reasoning.

Register for free