MLOps Community
Kopal Garg · Nov 12th, 2025
Taming LLMs with NeMo Guardrails
LLMs can perform complex tasks like drafting contracts or answering medical questions, but without safeguards, they pose serious risks—like leaking PII, giving unauthorized advice, or enabling fraud. NVIDIA’s NeMo Guardrails provides a modular framework that enforces AI safety through configurable input and output guardrails, covering risks such as PII exposure, jailbreaks, legal liability, and regulatory violations. In high-stakes areas like healthcare, it blocks unauthorized diagnoses and helps ensure HIPAA/FDA compliance. Each blocked action includes explainable metadata for auditing and transparency, turning AI safety from a black-box filter into configurable, measurable infrastructure.
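For readers who want to try this, a minimal sketch of wiring up NeMo Guardrails in Python, assuming a local `config/` directory that holds the rail definitions (the path and the example prompt are illustrative):

```python
from nemoguardrails import RailsConfig, LLMRails

# Load rail definitions (config.yml plus Colang flows) from a local
# directory; the "./config" path is an assumption for this sketch.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# A request that trips an input rail (e.g. a medical-advice or PII check)
# is intercepted before it ever reaches the underlying LLM.
response = rails.generate(messages=[
    {"role": "user", "content": "What dose of this medication should I take?"}
])
print(response["content"])
```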
# LLMs
# NeMo Guardrails
# PII
# HIPAA/FDA
Andy Pernsteiner & Demetrios Brinkmann · Nov 11th, 2025
Most AI projects don’t fail because of bad models; they fail because of bad data plumbing. Andy Pernsteiner joins the podcast to talk about what it actually takes to build production-grade AI systems that aren’t held together by brittle ETL scripts and data copies. He unpacks why unifying data, rather than moving it, is key to real-time, secure inference, and how event-driven, Kubernetes-native pipelines are reshaping the way developers build AI applications. It’s a conversation about cutting out the complexity, keeping data live, and building systems smart enough to keep up with your models.
# GPU Clusters
# Production-grade AI Systems
# VAST Data
Siddharth Bidasaria & Demetrios Brinkmann · Nov 5th, 2025
Demetrios Brinkmann talks with Siddharth Bidasaria about Anthropic’s Claude Code — how it was built, key features like file tools and Spotify control, and the team’s lean, user-focused approach. They explore testing, subagents, and the future of agentic coding, plus how users are pushing its limits.
# Claude Code
# Agentic Coding
# Anthropic
Deploying AI agents in enterprises is complex, balancing security, scalability, and usability. This post compares deployment paths on Google Cloud—highlighting Cloud Run with IAP as the most secure and flexible option—and shows how teams can build powerful agents with ADK without losing the human touch.
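As a rough illustration of the ADK side, defining an agent takes only a few lines of Python; the agent name, model, instruction, and tool below are assumptions for this sketch, not code from the post:

```python
from google.adk.agents import Agent

def get_order_status(order_id: str) -> dict:
    """Hypothetical tool: look up an order in an internal system."""
    return {"order_id": order_id, "status": "shipped"}

# A minimal ADK agent; served behind Cloud Run with IAP, only
# authenticated users in your org can reach it.
root_agent = Agent(
    name="support_agent",
    model="gemini-2.0-flash",
    instruction="Help employees check order status; escalate anything else to a human.",
    tools=[get_order_status],
)
```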
# AI Agent
# Agentops
# Generative AI Tools
# Data Science
# Artificial Intelligence
Jaipal Singh Goud & Demetrios Brinkmann · Nov 3rd, 2025
How do fine-tuned models and RAG systems power personalized AI agents that learn, collaborate, and transform enterprise workflows? And which technical challenges do we need to examine first before this becomes real?
# AI Models
# Fine Tuning
# SLMs
Sophia Skowronski, David DeStefano, Valdimar Eggertsson & 1 more speaker · Oct 31st, 2025
As AI agents become more capable, their real-world performance increasingly depends on how well they can coordinate tools. This month's paper introduces a benchmark designed to rigorously test how AI agents handle multi-step tasks using the Model Context Protocol (MCP) — the emerging standard for tool integration. The authors present 101 carefully curated real-world queries, refined through iterative LLM rewriting and human review, that challenge models to coordinate multiple tools such as web search, file operations, mathematical reasoning, and data analysis.
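For orientation, a minimal MCP client round-trip in Python looks roughly like this; the server command and tool name are hypothetical, while the session API comes from the official `mcp` SDK:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical MCP tool server launched over stdio.
server = StdioServerParameters(command="python", args=["tool_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server exposes
            print([tool.name for tool in tools.tools])
            # One step of a multi-step task; the benchmark scores how well
            # agents chain calls like this across search, files, and math.
            result = await session.call_tool("web_search", {"query": "MCP benchmark"})
            print(result.content)

asyncio.run(main())
```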
# MCP
# AI Agents
# LLM Judge
Modern LLMs are defined as much by how they’re trained as by what they learn. This post unpacks the often-overlooked foundations of that process: pretraining—the stage that shapes a model’s core reasoning and knowledge. Starting with ULMFiT’s breakthrough in transfer learning and InstructGPT’s formalized multi-stage pipeline, it explores how pretraining has evolved into a dynamic ecosystem of techniques, from instruction-augmented and multi-phase approaches to continual and reinforcement-based pretraining. Amid the growing complexity and shifting definitions, one truth remains: understanding pretraining is essential to understanding how language models think, reason, and behave.
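Underneath all of those variants sits the same core objective: next-token prediction. A minimal sketch of a single pretraining step, where `model` is any causal LM mapping token ids to logits and all names are illustrative:

```python
import torch
import torch.nn.functional as F

def pretrain_step(model, token_ids: torch.Tensor, optimizer) -> float:
    """One next-token-prediction step on a (batch, seq_len) tensor of ids."""
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]  # shift targets by one
    logits = model(inputs)                                 # (batch, seq_len - 1, vocab)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),               # flatten for cross-entropy
        targets.reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```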
# Language Models
# LLMs
Charlie Cheesman, Marissa Liu, Ana Shevchenko & 1 more speaker · Oct 23rd, 2025
Unicorn Mafia won the recent hackathon at Raise Summit and explained to me what they built, including all the tech they used under the hood to make their AI agents work.
# Hackathon
# Unicorn Mafia
# Raise Summit
# Yay.travel
When IT blocked every translation tool, Médéric Hurier decided not to wait. In just one lunch break, he built Slides-To-Translate — a fully automated Google Slides translator using Gemini 2.5 Flash, Colab, and Vertex AI — for only $0.04. His quick hack turned a bureaucratic bottleneck into a lightning-fast, secure, and reusable solution that proves anyone with a bit of code and curiosity can outpace corporate constraints.
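The core of a hack like this is small. A hedged sketch of the per-slide translation call using the `google-genai` SDK (the helper name and prompt wording are assumptions, not Hurier's actual code):

```python
from google import genai

client = genai.Client()  # picks up Gemini / Vertex AI credentials from the environment

def translate_text(text: str, target_lang: str = "English") -> str:
    """Hypothetical helper: translate one Slides text element with Gemini."""
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=(
            f"Translate the following slide text into {target_lang}, "
            f"keeping line breaks intact:\n\n{text}"
        ),
    )
    return response.text
```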
# Generative AI Tools
# Data Science
# Programming
# Coding
# Hacking
Biswaroop Bhattacharjee & Demetrios Brinkmann · Oct 17th, 2025
What if AI could actually remember like humans do? Biswaroop Bhattacharjee joins Demetrios Brinkmann to challenge how we think about memory in AI. From building Cortex—a system inspired by human cognition—to exploring whether AI should forget, this conversation questions the limits of agentic memory and how far we should go in mimicking the mind.
# Agentic Memory
# AI Agents
# Cortex