MLOps Community
+00:00 GMT
Sign in or Join the community to continue

Evaluating LLMs for AI Risk

Posted Oct 31, 2023 | Views 873
# LLMs Evaluation
# AI Risk
# Robust Intelligence
Share
speakers
avatar
Finn Howell
Machine Learning Engineer @ Robust Intelligence

Finn, a ML Engineering Manager at Robust Intelligence, leads a team focused on detecting and mitigating AI risk. Previously, she built the EHR backend and clinical NLP models at One Medical, contributing to key innovations during their IPO and rapid growth period. She is passionate about advancing AI to benefit society while mitigating harms.

+ Read More
avatar
Adam Becker
IRL @ MLOps Community

I'm a tech entrepreneur and I spent the last decade founding companies that drive societal change.

I am now building Deep Matter, a startup still in stealth mode...

I was most recently building Telepath, the world's most developer-friendly machine learning platform. Throughout my previous projects, I had learned that building machine learning powered applications is hard - especially hard when you don't have a background in data science. I believe that this is choking innovation, especially in industries that can't support large data teams.

For example, I previously co-founded Call Time AI, where we used Artificial Intelligence to assemble and study the largest database of political contributions. The company powered progressive campaigns from school board to the Presidency. As of October, 2020, we helped Democrats raise tens of millions of dollars. In April of 2021, we sold Call Time to Political Data Inc.. Our success, in large part, is due to our ability to productionize machine learning.

I believe that knowledge is unbounded, and that everything that is not forbidden by laws of nature is achievable, given the right knowledge. This holds immense promise for the future of intelligence and therefore for the future of well-being. I believe that the process of mining knowledge should be done honestly and responsibly, and that wielding it should be done with care. I co-founded Telepath to give more tools to more people to access more knowledge.

I'm fascinated by the relationship between technology, science and history. I graduated from UC Berkeley with degrees in Astrophysics and Classics and have published several papers on those topics. I was previously a researcher at the Getty Villa where I wrote about Ancient Greek math and at the Weizmann Institute, where I researched supernovae.

I currently live in New York City. I enjoy advising startups, thinking about how they can make for an excellent vehicle for addressing the Israeli-Palestinian conflict, and hearing from random folks who stumble on my LinkedIn profile. Reach out, friend!

+ Read More
SUMMARY

How do you write a stress test for an LLM? This talk explores cutting-edge techniques to red-team generative AI and build validation engines that algorithmically probe models for security, ethics, and safety issues. Attendees will learn a framework to manage AI risk spanning the model lifecycle, from data collection through production.

+ Read More

Watch More

49:50
Evaluating LLM-based Applications
Posted Jun 20, 2023 | Views 2.3K
# LLM in Production
# LLM-based Applications
# Redis.io
# Gantry.io
# Predibase.com
# Humanloop.com
# Anyscale.com
# Zilliz.com
# Arize.com
# Nvidia.com
# TrueFoundry.com
# Premai.io
# Continual.ai
# Argilla.io
# Genesiscloud.com
# Rungalileo.io
Emerging Patterns for LLMs in Production
Posted Apr 27, 2023 | Views 2.2K
# LLM
# LLM in Production
# In-Stealth
# Rungalileo.io
# Snorkel.ai
# Wandb.ai
# Tecton.ai
# Petuum.com
# mckinsey.com/quantumblack
# Wallaroo.ai
# Union.ai
# Redis.com
# Alphasignal.ai
# Bigbraindaily.com
# Turningpost.com