$1,000 Fullstack Live Event Free With a Membership!
Join us for an in-depth event where you’ll explore the best practices for optimizing LLMs, implementing MLOps pipelines, and securing GenAI applications. Dive into hands-on techniques for addressing AI security risks, fine-tuning LLMs for multitasking, and ensuring AI ethics and privacy with confidential computing.
In this session, our experts will share hands-on insights into building and running a complete MLOps pipeline: from data preparation to model training, deployment, and monitoring.
Learn how integrating a 3D architecture can boost efficiency and scalability, and discover key challenges, smart solutions, and best practices for bringing ML pipelines into production successfully.
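To give a flavor of what such a pipeline involves, here is a minimal sketch of a single iteration: data preparation, training, evaluation, and logging the resulting model. The use of scikit-learn and MLflow here is illustrative only and an assumption on our part; the session may rely on different tooling.

```python
# Minimal sketch of one MLOps pipeline iteration: prepare data, train,
# evaluate, and log the run. scikit-learn and MLflow are assumptions here,
# not the session's prescribed stack.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Data preparation
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline"):
    # Model training
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)

    # Evaluation signal that a monitoring job could track over time
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("test_accuracy", acc)

    # Deployment artifact: the logged model can be served later
    mlflow.sklearn.log_model(model, "model")
```

In production, these same steps would be orchestrated, versioned, and monitored continuously rather than run by hand.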
GenAI often looks impressive, but hidden risks such as bias, data breaches, and ethical pitfalls lurk beneath the surface. This session shows you what to look out for so you can recognize and avoid these problems early on.
Get a clear view of the hidden challenges of generative AI, and learn how to develop safe, responsible, and future-proof GenAI solutions.
Ready to look beneath the surface?
Learn how to run large language models optimally, even on a single GPU. This session shows you proven strategies for increasing efficiency, using specialized agents, and fine-tuning for a variety of tasks.
We will also look at factors such as data quality, model architecture, and external dependencies, all of which are crucial for strong LLM results despite limited resources.
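As a taste of what running an LLM on a single GPU can look like, here is a minimal sketch that loads a 7B-parameter model with 4-bit quantization. The Hugging Face transformers + bitsandbytes stack and the specific model ID are assumptions for illustration, not the session's exact setup.

```python
# A minimal sketch of 4-bit quantization to fit a 7B-parameter LLM on a
# single consumer GPU. Library choice and model ID are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model, swap as needed

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit weights cut memory roughly 4x vs fp16
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 for speed
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                     # place layers on the available GPU
)

prompt = "Summarize the main risks of deploying LLMs in production."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```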
Learn how to use scalable continuous training pipelines to ensure the quality of your ML models, even with large data volumes and complex models.
We will show you how frameworks such as Apache Spark and Kafka process data efficiently, how automatic resource scaling reduces costs, and how versioning keeps everything under control. You will also learn how monitoring and Infrastructure as Code (IaC) make your pipelines reliable and maintainable.
Ideal for anyone who wants to bring AI projects into production sustainably and efficiently.
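For illustration, here is a minimal sketch of the ingestion step of such a pipeline, using Spark Structured Streaming to consume training events from a Kafka topic. The broker address, topic name, and console sink are placeholder assumptions.

```python
# A minimal sketch of the ingestion step of a continuous-training pipeline:
# Spark Structured Streaming consuming events from a Kafka topic.
# Requires the spark-sql-kafka connector on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("continuous-training-ingest").getOrCreate()

# Read raw events from Kafka as an unbounded stream
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "training-events")
    .load()
)

# Kafka delivers bytes; cast the payload to string for downstream parsing
decoded = events.select(col("value").cast("string").alias("payload"))

# In a real pipeline this sink would write versioned feature data or trigger
# retraining; the console sink here just makes the stream observable.
query = decoded.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```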
Learn how Confidential Computing technologies such as Intel TDX and SGX protect sensitive data and AI models from unauthorized access – especially in regulated industries such as healthcare and finance.
This session shows how Trusted Execution Environments (TEEs) help to ensure data protection and compliance (EU AI Act, GDPR), paving the way for trustworthy, privacy-oriented AI applications.
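As a small illustration of working with TEEs in practice, the sketch below only checks whether a Linux host exposes an SGX enclave device node. It is a pre-flight environment check under our own assumptions, not an attestation flow and not part of the session material.

```python
# A minimal sketch of a pre-flight check for a TEE-capable host: it only
# looks for the Linux SGX device nodes; it does not prove that an enclave
# is running or attested. Paths cover the in-kernel driver (/dev/sgx_enclave)
# and the older out-of-tree DCAP driver (/dev/sgx/enclave).
from pathlib import Path

SGX_DEVICE_NODES = ["/dev/sgx_enclave", "/dev/sgx/enclave"]

def sgx_device_present() -> bool:
    """Return True if an SGX enclave device node is exposed by the kernel."""
    return any(Path(node).exists() for node in SGX_DEVICE_NODES)

if __name__ == "__main__":
    if sgx_device_present():
        print("SGX device node found: the host can launch enclaves.")
    else:
        print("No SGX device node found: enclave workloads cannot run here.")
```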
Eric Joachim Liese
Expert in MLOps, AI strategies, data lakes and the automation of AI and data processes in the cloud.
René Brunner
Expert in Python programming, data science and big data analytics.
Maish Saidel-Keesing
Expert in AWS Cloud, container technologies, DevOps, automation, and open source.
Vanessa Lopes
Expert in generative AI, machine learning, and scalable production systems in the financial and regulatory sector.
Arya Soni
Tech Entrepreneur & Cloud Enthusiast
Yatindra Shashi
Expert in artificial intelligence, confidential computing, cloud-native technologies, networks, and IT security.