Artificial intelligence is evolving at an unprecedented pace. New models, frameworks, and buzzwords appear almost daily. Yet while technologies change, the core principles of successful AI systems remain the same: sound architecture, deep context understanding, and thoughtful strategic decisions.
This course distills the most valuable insights from leading MLcon sessions into a long-lasting knowledge foundation for developers and AI professionals. Instead of a year-specific recap, you gain a clear understanding of the concepts, patterns, and mental models that will stay relevant in 2026 and beyond.
Modern AI is about more than experimenting with the latest models. Real impact emerges when LLMs, agents, data, architecture, and governance work together as a cohesive system. In this course, you’ll learn:
why many AI initiatives fail despite powerful models
how context, evaluation, and system design shape AI quality
the role of embeddings, RAG, agents, and protocols in scalable solutions
how to design AI systems that are robust, explainable, and maintainable
The content is intentionally model-agnostic, helping you make better decisions regardless of whether you work with GPT-4, GPT-5, or open-source alternatives.
OpenAI’s language models excel at generating fluent text, but their impact is magnified when they produce outputs in a predetermined JSON structure. In this session, Rainer Stropek explores Structured Outputs: techniques that ensure AI responses adhere to developer-defined schemas. Attendees will gain insights into how structured outputs enable robust, production-ready integrations of OpenAI into custom applications. Practical examples will illustrate how structured outputs can enhance reliability and reduce post-processing effort. A basic understanding of the OpenAI APIs is recommended to fully appreciate the applications discussed in this talk.
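As a minimal sketch of the idea: the schema below follows the shape OpenAI's Structured Outputs feature expects under `response_format`, but the model call itself is replaced by a hard-coded sample response, and the validator is a tiny stand-in for full JSON Schema validation. The `invoice` schema and its fields are illustrative assumptions, not from the talk.

```python
import json

# A developer-defined schema, in the shape OpenAI's Structured Outputs
# feature accepts under response_format={"type": "json_schema", ...}.
invoice_schema = {
    "name": "invoice",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "customer": {"type": "string"},
            "total": {"type": "number"},
            "currency": {"type": "string"},
        },
        "required": ["customer", "total", "currency"],
        "additionalProperties": False,
    },
}

def check_response(raw: str, schema: dict) -> dict:
    """Parse a model response and verify it against the schema's
    required keys; a toy stand-in for a real JSON Schema validator."""
    data = json.loads(raw)
    spec = schema["schema"]
    missing = [k for k in spec["required"] if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    if spec.get("additionalProperties") is False:
        extra = [k for k in data if k not in spec["properties"]]
        if extra:
            raise ValueError(f"unexpected keys: {extra}")
    return data

# With Structured Outputs enabled, the model is constrained to emit
# JSON like this, so parsing never needs brittle regex post-processing.
sample = '{"customer": "ACME GmbH", "total": 1250.0, "currency": "EUR"}'
invoice = check_response(sample, invoice_schema)
```

Because the model is constrained to the schema at generation time, the application-side check becomes a safety net rather than a recovery mechanism.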
Discover how to achieve a significant improvement in answer quality in Retrieval Augmented Generation (RAG) systems. Common RAG solutions combine the strengths of retrieval-based methods and generative models, bringing the power of generative AI to company data. However, many fail to deliver the required accuracy and relevance of generated text because they are built from standard components and architecture. By leveraging advanced techniques such as knowledge graphs, you can greatly surpass basic RAG implementations. This approach not only enhances performance but also makes the quality of the underlying LLM less critical and reduces hardware requirements, optimizing both results and resource utilization.
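The core idea of graph-augmented retrieval can be sketched as follows: instead of retrieving documents in isolation, the retriever expands the query entity through a knowledge graph and adds both the neighbouring documents and the relations themselves to the LLM context. The graph, entities, and documents below are toy assumptions for illustration, not the speaker's actual pipeline.

```python
# Toy knowledge graph: entity -> list of (relation, entity) edges.
graph = {
    "ACME": [("produces", "Widget X"), ("headquartered_in", "Berlin")],
    "Widget X": [("certified_for", "EU market")],
    "Berlin": [],
    "EU market": [],
}

# Document store keyed by entity, as a basic RAG index might hold.
documents = {
    "ACME": "ACME is an industrial manufacturer.",
    "Widget X": "Widget X is ACME's flagship product.",
    "EU market": "EU market access requires CE certification.",
}

def graph_expand(seed, hops=2):
    """Collect all entities reachable from the seed within `hops` edges."""
    frontier, seen = {seed}, {seed}
    for _ in range(hops):
        frontier = {dst for src in frontier for _, dst in graph.get(src, [])} - seen
        seen |= frontier
    return seen

def build_context(query_entity):
    """Fetch documents for the entity and its graph neighbourhood, plus
    the relations themselves as explicit facts for the LLM prompt."""
    entities = graph_expand(query_entity)
    facts = [f"{s} {r} {d}" for s in sorted(entities) for r, d in graph.get(s, [])]
    docs = [documents[e] for e in sorted(entities) if e in documents]
    return facts + docs

context = build_context("ACME")
```

A plain vector search for "ACME" would likely miss the certification document two hops away; the graph walk surfaces it, which is why the generator model needs to do less inferential heavy lifting.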
This talk provides a practical, step-by-step guide to building AI agents using only Python and an LLM API. Starting from a basic API call, we’ll progressively add capabilities: prompt engineering for consistent outputs, implementing memory and context management, adding tool use and function calling, handling errors and retries, and implementing basic reasoning loops. Along the way, we’ll identify common pitfalls including context window limitations, hallucination in tool use, infinite loops, and error cascades. Each implementation choice will be explained with working code examples. By the end, attendees will understand the core components of AI agents and have the knowledge to build their own agents for real-world tasks without relying on frameworks or abstractions.
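The loop described above can be condensed into a short, framework-free sketch. The LLM is mocked out with a deterministic stand-in, and the JSON tool-call convention is an assumption for illustration; a real implementation would swap `mock_llm` for an actual API call.

```python
import json

def calculator(expression: str) -> str:
    """A tool the agent can call; eval restricted to bare arithmetic."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def mock_llm(messages):
    """Deterministic stand-in for a real LLM API call: it requests the
    calculator tool once, then answers using the tool result."""
    last = messages[-1]["content"]
    if last.startswith("TOOL_RESULT:"):
        return json.dumps({"final": f"The answer is {last.split(':', 1)[1]}"})
    return json.dumps({"tool": "calculator", "args": {"expression": "6 * 7"}})

def run_agent(task, max_steps=5):
    """Basic reasoning loop: call the model, execute any requested tool,
    feed the result back, and stop on a final answer or the step limit
    (guarding against infinite loops and error cascades)."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = json.loads(mock_llm(messages))
        if "final" in decision:
            return decision["final"]
        tool = TOOLS.get(decision.get("tool"))
        try:
            result = tool(**decision["args"]) if tool else "unknown tool"
        except Exception as exc:  # surface tool errors to the model
            result = f"error: {exc}"
        messages.append({"role": "user", "content": f"TOOL_RESULT:{result}"})
    return "step limit reached"

answer = run_agent("What is 6 * 7?")
```

The `max_steps` cap and the error-to-message feedback are the two cheapest defences against the pitfalls the abstract lists: infinite loops and error cascades.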
Agentic systems are emerging as the next paradigm in AI application design – moving beyond static chatbots toward dynamic, context-aware, and modular ecosystems of intelligent agents. To operate effectively at scale, these systems must integrate three foundational capabilities: perceiving their environment, collaborating across specialized components, and interacting with users in real time. This talk introduces an architecture that leverages three interoperable protocols to achieve these goals. The Model Context Protocol (MCP) enables dynamic context hydration and semantic grounding, allowing agents to operate on structured and unstructured inputs tailored to specific tasks. The Agent-to-Agent Protocol (A2A) facilitates orchestrated collaboration between modular agents, enabling delegation, specialization, and distributed reasoning. The Agent-User Interaction Protocol (AG-UI) provides a real-time interface layer that closes the loop with users, supporting direct feedback, steerability, and reactive experiences. Together, these protocols form a cohesive foundation for building intelligent systems that are decoupled, composable, and resilient – minimizing integration debt while enabling rich, responsive behaviors. This session explores design patterns, practical challenges, and architectural strategies for applying MCP, A2A, and AG-UI in real-world agentic applications, offering a blueprint for anyone aiming to architect intelligence beyond chat.
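The decoupling the three protocols provide can be illustrated, very loosely, with structural interfaces: an orchestrating agent depends only on a context role (MCP-like), a peer-delegation role (A2A-like), and a user-channel role (AG-UI-like). This is an architectural sketch of the layering only; the method names and payloads are invented and do not reflect the actual protocol specifications.

```python
from typing import Optional, Protocol

class ContextProvider(Protocol):
    """MCP-like role: hydrate an agent's context for a task."""
    def get_context(self, task: str) -> dict: ...

class PeerAgent(Protocol):
    """A2A-like role: accept a delegated sub-task from another agent."""
    def delegate(self, subtask: str) -> str: ...

class UserChannel(Protocol):
    """AG-UI-like role: stream state to the user, accept steering."""
    def emit(self, event: dict) -> None: ...
    def feedback(self) -> Optional[str]: ...

class Orchestrator:
    """An agent wired only against the three protocol roles, so each
    layer (context, collaboration, UI) can be swapped independently."""
    def __init__(self, ctx: ContextProvider, peer: PeerAgent, ui: UserChannel):
        self.ctx, self.peer, self.ui = ctx, peer, ui

    def handle(self, task: str) -> str:
        context = self.ctx.get_context(task)
        self.ui.emit({"status": "working", "context_keys": list(context)})
        result = self.peer.delegate(f"{task} | {context.get('hint', '')}")
        self.ui.emit({"status": "done"})
        return result

# Minimal in-memory implementations to show the wiring.
class StaticContext:
    def get_context(self, task): return {"hint": "use metric units"}

class EchoPeer:
    def delegate(self, subtask): return f"done: {subtask}"

class BufferChannel:
    def __init__(self): self.events = []
    def emit(self, event): self.events.append(event)
    def feedback(self): return None

ui = BufferChannel()
agent = Orchestrator(StaticContext(), EchoPeer(), ui)
result = agent.handle("summarize report")
```

Because `Orchestrator` never imports a concrete context store, agent runtime, or UI framework, each can evolve behind its protocol, which is precisely the "minimizing integration debt" argument the session makes.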
AI has been around for many years: we’ve been using text-to-speech and speech-to-text for over a decade (Siri, Alexa, etc.), and cameras could follow faces a decade ago. The major change came just two years ago, when Generative Pre-trained Transformers emerged for public use: yes, GPT. These are Large Language Models (LLMs), and for us geeks there are also the open-source LLMs such as Phi 3.5, Qwen 2.5, Llama 3.2 and Mistral; these are advancing so fast I fear even this abstract will be out of date. John will pull these LLMs apart to show their internal workings, demonstrate some cool features, and help you better understand how they work, what they’re good at, what they’re not good at, and why: from vocabulary, tokenisation and embeddings to attention heads, quantisation and performance. We’ll be running everything locally and will try some German in the LLMs too; simple code yet fascinating results. If you get the chance, download “ollama” (.com) and one or more of the models mentioned above onto your laptop, and please bring it along.
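Two of the building blocks the talk names, tokenisation and embeddings, can be sketched in a few lines of plain Python. The vocabulary, the three-dimensional vectors, and the whitespace tokeniser are toy assumptions; real LLMs use learned subword tokenisers (e.g. BPE) and embedding vectors with thousands of dimensions.

```python
import math

# Toy vocabulary and embedding table: one small vector per token id.
vocab = {"king": 0, "queen": 1, "apple": 2}
embeddings = [
    [0.90, 0.80, 0.10],  # king
    [0.85, 0.82, 0.12],  # queen
    [0.10, 0.20, 0.95],  # apple
]

def tokenise(text):
    """Naive whitespace tokeniser mapping known words to integer ids;
    real LLMs split text into subword units instead."""
    return [vocab[w] for w in text.lower().split() if w in vocab]

def cosine(a, b):
    """Cosine similarity: how closely two embedding vectors point
    in the same direction, regardless of their length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

ids = tokenise("King Queen apple")
king, queen, apple = (embeddings[i] for i in ids)
```

Even in this toy setup, the semantically related words end up with similar vectors: `cosine(king, queen)` comes out far higher than `cosine(king, apple)`, which is the property attention layers build on.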

Create & innovate with Generative AI, LLMs & Machine Learning at MLcon London. Turn theory into action and learn to build AI-powered intelligent systems from industry experts. Deep dive into Advanced ML & MLOps, from prototype to production. Book your tickets and join us May 11–15, 2026!
AI Engineers & ML Practitioners who want to make their systems more robust, traceable, and production-ready
Software developers who want to meaningfully integrate AI into existing architectures
Tech professionals and architects who need to understand AI not only from a technical but also from a strategic perspective
Thoroughly evaluate and classify current AI and ML approaches
Develop more intelligent systems with a clear architecture and clean context structure
Better justify technical decisions regarding LLMs, evaluation, and integration
Plan AI projects strategically instead of just implementing them experimentally
Transfer insights from MLcon 2025 directly into your own work
Rainer Stropek
Founder & Managing Director at software architects
Melanie Bauer
Student Developer focused on AI-driven applications and emerging technologies
Peter Fuchs
AI-focused student developer and generative AI practitioner, Audience Award winner
Paul Dubs
CTO & Co-Founder at Xpress AI, expert in AI agents and large-scale intelligent systems
Sinda Khenine
Data Scientist specializing in predictive modeling, analytics, and data-driven business insights
Max Marschall
Consultant and conference speaker at Thinktecture AG
John Davies
AI entrepreneur and former global chief architect in finance, co-founder of Incept5
Marco Frodl
Principal Consultant for Generative AI at Thinktecture AG, specializing in LLM-based AI workflows
Rachel-Lee Nabors
Developer education leader and web standards expert, former React Team and W3C contributor
You’re all set! Grab a pen and paper and simply start your course. Browse through the complete list of courses here.