
Seeing Like a Language Model

Posted Mar 04, 2024 | Views 635
# LLMs
# Embedding
# Notion
SPEAKERS
Linus Lee
Research Engineer @ Notion

Linus is a Research Engineer at Notion prototyping new interfaces for augmenting our collaborative work and creativity with AI. He has spent the last few years experimenting with AI-augmented tools for thinking, like a canvas for exploring the latent space of neural networks and writing tools where ideas connect themselves. Before Notion, Linus spent a year as an independent researcher, during which he was Betaworks's first Researcher in Residence.

Demetrios Brinkmann
Chief Happiness Engineer @ MLOps Community

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter.

SUMMARY

What do language models see when they read and generate text? What are the terms by which they process the world? In this talk, I'll share some encouraging updates on my continuing exploration of how embeddings represent meaning, enabled by recent breakthroughs in interpretability research, and how these insights might help us build better capabilities for retrieval-augmented LLM systems and imagine more natural interfaces for reading and writing.

TRANSCRIPT

Seeing Like a Language Model

AI in Production

Slides: https://drive.google.com/file/d/1K-QMuGMBlaCclcMI3-mpjAb3XrP4LjNJ/view?usp=drive_link

Demetrios [00:00:05]: I'm going to bring onto the stage my man Linus for his second appearance at the AI in Production conference. What's up, my man?

Linus Lee [00:00:17]: Hi. Hi. It's good to be here again, dude.

Demetrios [00:00:20]: I just gotta say, I still think about your first talk, and sometimes I'll dream about it too, in a very good way. No nightmares. I'll dream about it. And then when I use Notion, I notice the things that you talked about, because when you were talking about it, I don't think you had implemented everything, and especially not with the clicking and how you interact with LLMs through just clicking instead of chatting. And that for me was so revolutionary when you said it at the time, and it continues to be something that I kind of parrot so much, because I'm like, chat is not the best way to interact with LLMs most of the time, and we don't need to default to chat.

Linus Lee [00:01:09]: Yeah, hopefully we'll talk more about that today and I will continue to try to keep it in the good kind of dream zone, not the nightmare.

Demetrios [00:01:19]: Yeah, I hope so too. So I'll let you get rocking. Do you have a screen to share or anything?

Linus Lee [00:01:26]: Do it.

Demetrios [00:01:27]: All right, man, this is exciting. For those that are interested, Linus is going to be talking to us today about seeing like a model. Here we go. Boom shakalaka. I'm going to get this evaluation survey out of here. I'm going to jump off and I'll see you in like 20, 25 minutes.

Linus Lee [00:01:44]: Sounds good. All right. I want to talk about seeing like a model, and I guess we'll see what that means as we go on. But first, a little bit about me. I'm Linus. I work on AI at Notion as part of the AI team there. I've been here for about a year and a half, and before that I spent a lot of time working independently, exploring ideas in the space of prototyping with knowledge tools, tools for thought, ways people interact with text (mostly text, some images and other kinds of media), and trying out all different kinds of ways that people use AI to interact with information in ways that are more than just kind of textual back-and-forth or the traditional ways of reading.

Linus Lee [00:02:26]: Before that, I had some brief stints at companies like Betaworks and Ideaflow, but all along my focus has been: how can we improve the way that people interact with text? And though what I'm going to talk about today is a little bit speculative, I want to continue that discussion and share a little bit of what I'm thinking about, what kinds of future interfaces might be possible with language models: not just better versions of chat and better versions of what we have today, but what might be uniquely possible with language models and better generative models that weren't possible before. And to begin, before we get into the meat of the matter, I want to talk a little bit about where this comes from and what we've built at Notion. At Notion, we have a product called Notion AI, which helps you work within your knowledge base. We have three big categories of products. We have one that we internally call the AI writer, which helps you, in a kind of conversational way, summarize and sort through and revise content when you're writing a document in Notion. We have a feature called AI autofill that brings that capability into structured databases, so you can have a database of interviews, or a database of companies, or a database of tasks or projects or topics, and allow the AI to fill out entire rows or columns of data in a large database and keep them up to date. And most recently, last November, we launched a beta of a feature we call Q&A, which brings retrieval-augmented generation into your Notion workspace, or Notion knowledge base.

Linus Lee [00:03:48]: So you can ask it any kind of question about the content you have inside Notion, and we'll try to use a language model to answer that question. And there's more that's coming down the pipeline. But this kind of gestures at this sphere of features that are enabled by a language model that really understands and can work with all kinds of structured and unstructured information inside a Notion workspace. But these are things that are possible to imagine now and sort of build now. And I want to talk a little bit about what might be possible in the future as we get a better understanding of how these models work. And to do that, I'm going to start by talking about how we represent information. This is an IBM 1401.

Linus Lee [00:04:31]: The whole thing that you see in the picture is a single computer. And it was one of the most popular computers back in the 1960s; around 1965, I think it accounted for something like a majority of all computer deployments in the enterprise, in the country. And the reason it was so popular is because it really ushered in this era of stored-program computing, where instead of having a single machine or a single sort of program that you feed in to do any particular task, the machine could hold programs in memory and work more flexibly. But it was still in the punch card era, and a lot of things were very strange about this computer. One of the things that made this computer unique, especially compared to computers now, is that the 1401 is a decimal-based computer, in that it stored numbers not in terms of zeros and ones, but in terms of digits from zero all the way up to nine. And the fact that the computer worked in a decimal system made it really useful for certain kinds of purposes, like finance, where precision is paramount. But it also just added a lot of complexity.

Linus Lee [00:05:29]: If you're dealing with five times as many digits, you have to write more complex algorithms and have more complex circuitry. And there's a reason why we don't have decimal-based computers anymore, and instead we emulate a lot of other things in software. All computers today work in a binary system, which improves interoperability, improves the way that we reason about precision, and moves a lot of complexity from hardware into software, where things are a lot easier to work with. And so this, I think, is an interesting example of different kinds of representations of numbers kind of battling it out over the years, and the simpler, more modular, more compatible kind of representation winning in the market over time. Another example of a very different kind of representation of something is color. This is a color picker.

Linus Lee [00:06:17]: There are very many different ways to represent color in a piece of software, right? There's the color itself, which you can show on the screen. You could have a user kind of drag around this picker on the square. You could have numerical representations of colors. You could have RGB, which most people are probably most familiar with. You could have HSL, which is hue, saturation, lightness. There are other kinds of color models that are used for different purposes and different ways of representing color. Different numerical models of color are good for different things.

Linus Lee [00:06:50]: HSL, for example, or HSB (hue, saturation, brightness), is useful if you want to think about hue, what color the color is, as a separate axis from brightness, which is something you can't do in RGB. And in this other color space, the HSL color space, you can talk about the concept of making something brighter, or making something more saturated, or making something more red, which is a different set of operations than the operations that are easiest in, say, the CMYK color space or the RGB color space. So the representation that you use to represent color impacts the space of operations that you can imagine and the kinds of operations you can intuitively expose to the user who wants to really understand and work with color. And so in a lot of professional graphical editing applications, you'll see different representations of color. Lastly, my favorite example of an interesting kind of representation is spectrograms. When you work with sound, the sound or music in its raw form is just kind of vibrations in the air; it's vibrations over time. A spectrogram is a way of visualizing sound in a different space.

Linus Lee [00:07:53]: So instead of just looking at amplitude over time, you're now looking at a kind of frequency-domain breakdown. So in this image, for example, you see in each track, the bottom part of the track represents the lower frequencies, and as you go up to higher and higher rows in the image, they represent higher frequencies. And you can see, for example, where the bass comes in, and where the overall tone of the music shifts to be lower, or shifts to be more full, or where the upper range goes higher. And music professionals working in production will often work with sound in the frequency space rather than in the raw waveform space. In the raw waveform space, the most intuitive operations are things like making something louder or making something softer, which is not that useful. But in the frequency domain, you can apply more sophisticated operations intuitively: you can talk about bringing out the mids where the voices are, or you can talk about compressing the lows, or you can talk about more complex operations on the sound waves, in a way that's different from the operations that you can do in the raw form. So this is an example of an alternative representation of something that enables new ways of working with the thing.

Linus Lee [00:09:07]: And in all of these alternate forms of representation, I think there's one commonality, which is that these useful representations that enable new operations are designed from a really deep understanding of the thing they represent, whether it's music (scientifically, what is music, or what is sound) or color. Our color models are based on a deep science of both light and human perception of light. And these representations also evolved a lot through use and context, whether in music production or in photo editing or text editing. And so, being interested in text, the question that this raises for me is: can we discover new useful representations for other domains like text? Mechanically, can we automatically try to discover and improve the way we represent information? And I think language models present an opportunity for us to do that. So that brings us to language models and their intelligence. I have a hypothesis about why language models work so well in the things they do, which is that I think neural language models work so well because they learn a more useful, alternative representation of language in the latent space of the models, one that is more useful than just the raw words and tokens that humans work in. I think it's less likely that there's any kind of magic algorithm that works with tokens, but more likely that complex thought becomes simpler operations in this alternative representation, this kind of geometric or spatial representation that language models develop inside their latent spaces, which motivates me to study exactly how language models represent information inside the models.

Linus Lee [00:10:36]: And there's a branch of machine learning that studies this, called interpretability, which is where I'll draw a lot of what comes next. If you look inside models, there's some evidence already, especially in other modalities, of machine learning models learning useful or interesting representations. For example, in this work called An Overview of Early Vision in InceptionV1, they find specific neurons in a small (by modern standards) vision model that seem to correspond to human-interpretable features. So you can see here there are neurons that respond to certain kinds of textures, certain kinds of forms of color contrasts, high-frequency noise, low-frequency noise, centering, brightness, things like that. And you can imagine this as a kind of alternative representation that the model uses to represent what it's looking at, the individual features that it's using to understand what it's looking at. Another example comes from another Distill article, called Using Artificial Intelligence to Augment Human Intelligence, which trains a tiny vision model that understands fonts. And so in this case, you can see the model has kind of learned human-interpretable properties, like boldness and italic, and you can see in the geometric latent space of the model exactly what directions correspond to these different attributes that we as humans intuitively understand.

Linus Lee [00:11:52]: I've done some work in this space as well, and I talked about a bit of this last year. But you can find, even in more modern generative models, in text models and language models, and in image models like CLIP, directions that correspond to art style, or to the location or content of an image, or even the semantics, the sentiment, or the topic of a particular piece of text. And you can move through that space and find and manipulate specific features by moving through this kind of latent space of a model. And so there are clearly interesting representations inside the model that we can learn, and maybe we can do better at learning them. And so embeddings and other kinds of latent spaces show us what the model sees in a sample. And if we can read out what it sees, and if we can control what it creates in the intermediate layers of the model, maybe that can allow us to do more interesting operations on text and language and images than what we can do today with prompts. And so there's an interesting solution that came up in the last year that might allow us to build dramatically better forms of this kind of interface. These are based on three major works, among others, that I won't get too deep into. But the basic idea behind the techniques that these papers propose is that we assume two things. One, that over the course of training, models learn a huge array of different features: features like topics and grammatical structure, and other kinds of patterns and sentiment, and how these things interact with each other.

Linus Lee [00:13:22]: And two, we assume that each particular example only activates a few of these features. For example, a particular text can't simultaneously be a Python program and a romance novel. Under these two assumptions, there are certain kinds of secondary models that we can build that try to use these assumptions to their advantage, to kind of decode out specific features that the model may have learned about its input domain, about text or images. And so I've applied this kind of technique to a particular embedding model, and this is kind of an overview that I've built for myself to try to understand that space. So on the left side here you see a list of features that this technique has found in this particular embedding model. And in the middle you see a GPT-4-generated label for what that feature might correspond to. And down here you can see specific examples from the training set of the model that most strongly activate this particular feature in the embedding space. One way to interpret this is that these are all of the features that the model knows to recognize in an input, the terms of meaning by which the model studies or understands the input domain.
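
To make that concrete, here is a minimal sketch (in PyTorch, and not the speaker's actual tool) of how the encoder half of a trained sparse autoencoder can be used to read out which dictionary features an embedding turns on. The weight matrices, the feature labels, and their shapes are placeholders for whatever trained model and labeling pass you have.

```python
import torch

# Assumed to exist from a previously trained sparse autoencoder:
#   W_enc: (d_model, n_features) learned dictionary, b_enc: (n_features,)
#   labels: list of n_features auto-generated feature names.

def feature_activations(embedding: torch.Tensor,
                        W_enc: torch.Tensor,
                        b_enc: torch.Tensor) -> torch.Tensor:
    # ReLU keeps only the handful of features that are "on" for this input,
    # which is exactly the sparsity assumption described above.
    return torch.relu(embedding @ W_enc + b_enc)

def top_features(embedding, W_enc, b_enc, labels, k=5):
    acts = feature_activations(embedding, W_enc, b_enc)
    values, idx = acts.topk(k)
    # The strongest few features are "what the model sees" in this text.
    return [(labels[i], round(v.item(), 3)) for i, v in zip(idx.tolist(), values)]
```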

Linus Lee [00:14:32]: And then these are the examples that demonstrate the most of that particular feature. You can see, for example, here there's a bunch of features about specific kinds of topics: about biology and medical terminology, about medicine, chemistry, law, and legal contexts. There are also interesting features that are grammatical, like the use of a transitional phrase or first-person narration. There are also much more specific features, like the presence of the number 11, 12, or 13 in a particular text. And in general, you find that as you go to larger and larger models with more expressive embedding spaces, the features that the spaces are capable of encoding become a lot more detailed, like this particular kind of number feature. So one thing we can do with this tool is we can take some example piece of text, like this paragraph that I have about Taylor Swift, and I can ask the model: when you process this input, what are the specific features that get turned on? What are the features that you're using to understand and model this particular input? In this case, the model sees lists or series, because there's a bunch of lists of songs and lists of genres in here.

Linus Lee [00:15:31]: You can see that the model sees that there are dates and timestamps, because it mentions December 13, which is her birthday, and specific dates, and the presence of female pronouns. These seem like sensible features that the model has found in our input. The most interesting thing you can do with this, though, is not just looking at what the model sees, but controlling it. So you can go down here and I can, for example, select this feature. And now what we're going to do is we're going to take this input and put it through the model, and somewhere in the middle of the model, we're going to flip this feature from off to on. And you can see that when we turn this feature on here, we get mostly the same paragraph, except now the topic has switched from music to something in between music and law.

Linus Lee [00:16:13]: And here, one way to think about this is that, with a better understanding of how the model is representing text, representing this particular paragraph, we can go in and make more precise edits to that representation in the same terms in which the model is thinking about the input. In the same way that a musician might edit music in the frequency domain, we are now editing text in the domain of features rather than in the domain of tokens. There are other kinds of interesting edits that we can make. For example, there's a feature that corresponds to the presence of a question. And if we go here, we can see that when we run the model through and turn this feature on, there is no question to begin with, but now this text is phrased kind of like a question. If we run it again, we'll get other variations of the same paragraph, but sort of phrased like a question. There we go.
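
The "flip this feature from off to on somewhere in the middle of the model" step can be sketched roughly as follows: decompose the intermediate activation into dictionary features, overwrite one of them, and reconstruct the activation that the downstream layers will see. The `sae` object with `W_enc`/`b_enc`/`W_dec`/`b_dec` attributes is a stand-in for whatever trained autoencoder you have, not the speaker's implementation.

```python
import torch

def edit_in_feature_space(hidden: torch.Tensor, sae, feature_idx: int,
                          new_value: float) -> torch.Tensor:
    # Decompose the intermediate activation into interpretable features...
    feats = torch.relu(hidden @ sae.W_enc + sae.b_enc)
    # ...force one feature on (or off)...
    feats[..., feature_idx] = new_value
    # ...and map back, so the rest of the model sees the edited representation.
    return feats @ sae.W_dec + sae.b_dec
```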

Linus Lee [00:17:02]: So that's interesting. We can kind of study the geometric space of features that the model is using to model data. I think there are a lot of other kinds of interfaces you can build now with this better understanding of how the model is understanding its data. One very trivial example that I've built as a demo here is this kind of highlighting tool. This is the first page or so of a novel called Neuromancer. And what I've done here is I've asked the model: look at all the sentences in this input and tell me what the most prominently represented features are that you see. And here are all the features that the model found to be most important in this text. This one indicates the presence of dialogue.

Linus Lee [00:17:45]: And you can see that I've highlighted which sentences activate the feature the most strongly, and here are all of the dialogue passages in this text. You can see the presence of first-person narration as a kind of heat map over this text. There are other interesting things, like the presence of physical interactions between characters, or movement, or position, or change, when people are nudging or moving or walking. There are also other interesting features, like the presence of a question. Negation is interesting: it only really turns on when there is a "not" in this passage. And so that's an example of something you can build in this kind of feature-space representation of text.
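
The highlighting demo amounts to scoring each sentence by how strongly it activates a single feature and normalizing that score into a highlight intensity. A rough sketch, with `embed` (a sentence embedder) and `sae` as placeholder objects rather than the actual demo code:

```python
import torch

def sentence_heatmap(sentences, embed, sae, feature_idx):
    # Score every sentence by its activation on one dictionary feature.
    scores = []
    for s in sentences:
        feats = torch.relu(embed(s) @ sae.W_enc + sae.b_enc)
        scores.append(feats[feature_idx].item())
    peak = max(scores) if max(scores) > 0 else 1.0
    # Normalize to 0..1 so the values can drive highlight opacity directly.
    return [(s, score / peak) for s, score in zip(sentences, scores)]
```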

Linus Lee [00:18:31]: So latent spaces, like embedding spaces, appear to encode interpretable, controllable representations of the model's input and output. That, I think, is a really promising place to start for us to try to manipulate text, or other kinds of modalities like images, in a more interesting, semantically powerful representation space. And I think one thing that's very interesting about this technique is that it can surprise us about the internal structure of language models and how they work. A lot of other approaches to controlling language models start with us assuming what the model knows, us demanding the model be capable of doing something. But in this case, we can ask the model: what are the things that you see? And I think that's a really subtle but interesting and motivating point about using this particular technique for language models. There's, of course, a lot of room for future work here: work on improving the quality of features that you get out of this with better dictionary learning, applications to other modalities and models, like more popular embedding models and CLIP models, and, most interesting for me, other kinds of interface possibilities.

Linus Lee [00:19:27]: And I want to end by touching on some of the interface possibilities that I think are most interesting here. So let's talk about what interfaces you might be able to build. The first one that I'll talk about is one that I just showed you, which is a kind of heat map over text, so that instead of reading every word or every sentence or paragraph, you could kind of zoom out and get a sense of, okay, what's going on in this part of the text versus that part of the text. Another way to look at this data might be a spectrogram overlaid on top of, like, a scroll bar, so that if you imagine reading a news article, maybe there is a little heat map on the right that tells you: here's a bunch of sentences that are more opinions, and here's a bunch of sentences that are more about facts; or maybe here's generally where the political sentences are versus where the sentences about finance and economics are. You can imagine other versions of this being useful for consuming other kinds of media, but now you're starting to visualize and see the content in terms of a different way of representing the meaning, rather than just the text. I think this semantic representation of text also enables other interesting possibilities for writing, like semantic copy-paste.

Linus Lee [00:20:31]: So instead of clicking on some text and seeing bold, italic, and underline, maybe you want to see a bunch of levers that correspond to specific tone. Or maybe you can copy just the tone features of a particular text and paste them, or just the topic features of a particular text and paste them onto some other text, and edit in semantic space. I think there are also lots of really interesting interface patterns that exist in more powerful creative tools in other domains that have already evolved with more interesting representations, like music production, and maybe we could adopt them for text. So, for example, this is Logic Pro on the iPad, and here you see this kind of track view, where each instrument is separated out into a separate track. And maybe there's a writing interface that could be interesting to explore, where the topic versus the tone versus the level of technicality are separated out into separate tracks, and when you hit compile, it produces an essay for you that reflects all of those different kinds of features over the course of a particular passage of text.

Linus Lee [00:21:31]: And lastly, maybe you can share specific features or specific packages of features as a style patch. So you can say: here's my writing style, here are these features and exactly the values you need to dial into your model. And someone else can take that and apply it to their writing, so you can share and operate in a kind of semantic edit space, rather than just by tokens and words. And through all of this exploration, there is one thing that I'm really trying to build towards, which is that working with text is very different from working with physical materials. Physical materials are full of rich texture and detail that communicate to the user how they can be used. Physical materials show us what they're made of and how they can be composed together, fit together, and text really doesn't have any of that kind of affordance.

Linus Lee [00:22:11]: And so I'm interested in changing the way that people interact with text to be a little more like that. Maybe in an ideal world, you can see a paragraph and immediately grok: oh, that's about x, and the style is y, and this is similar to this other thing that I saw, because they have the same color or the same shape. Just moving our representations more towards something that you or I can grok immediately, instead of having to read every single token or word. Ultimately, I think the purpose of building these productivity tools and tools for reading and writing is to bring thought outside the head, as Bret Victor said: to represent concepts in a form that can be seen with the senses and manipulated with the body directly. And I think as we understand models more deeply, if we can use them to understand our domain and find different representations, more humane representations, for the information that we're dealing with, that could open up a really exciting new age of exploring different interfaces for working with ideas. So thank you.

Demetrios [00:23:06]: Holy smokes, man. You did it again. You landed the plane at the end there. Oh, my God, what a talk. I'm literally trying to wrap my head around this idea, but copying the tone instead of copying the words just had me going like, oh, my God. What? This is nuts. This is so nuts. And then the patches.

Linus Lee [00:23:35]: What? Yeah, I think all of those are examples of things that we take for granted in other creative domains where we have an easier time working with those representations. Like, if you're in PowerPoint or Figma, you can copy style and paste it onto another shape. So why haven't we really done that for text or images? It's because those materials are just harder to work with. But maybe models give us the right tools and the right representations to be able to work with that.

Demetrios [00:23:59]: Oh, my God. As people are blowing up the chat, they are saying: beautiful work, this is inspiring. Wow. Yes. Minds are blown right now. So there were a lot of questions that came through when you were going through your own tool, I think, that you built. Is it open source?

Linus Lee [00:24:20]: First of all, it's not quite open source, for mostly kind of dumb, trivial reasons that are more logistics than technology, but it is public, so you can go and try it. It's at linus.zone/prism. Prism is the name of the tool because it spreads out a piece of text into a spectrum of color, kind of. But if you go to linus.zone/prism, you'll find the tool. It's a bit slow currently, but you can play with it.

Demetrios [00:24:51]: Dude, I'm still wrapping my head around this, and the prism, again, it's a very visual representation of what you're trying to do with text. You're saying text is much more than just... And also, like voice too, right? This can be applied to that, I think, in a way where you can...

Linus Lee [00:25:12]: Images, I think as well.

Demetrios [00:25:14]: Yeah. Images also. And I see where you're going. In the beginning, I was kind of like, why is he talking about... I love histograms, and I also love, what was the other one, the spectrogram. I love using those in videos and stuff and all that, but I was sitting there like, where is he going with this? I don't understand how this relates at all to language models. And then you bring it back and you ground it: okay, how can we have histograms or spectrograms for language, or, if it is AI-generated content, how can we make sure that we can fiddle with every knob that we have possible? If that's what I'm understanding right.

Demetrios [00:25:59]: It's like, what are the different knobs that we can turn? And let's break those out in very granular fashion.

Linus Lee [00:26:07]: Not only granular, but also human-interpretable. So instead of having to look at how these 4,000 neurons respond to different kinds of meanings, we can actually have a space that's aligned between the model and the human, where we say: okay, these are the knobs. Here are the knobs, in the terms that make sense to me as a creative or as a writer or as a decision maker.

Demetrios [00:26:28]: This is so wild. This is so wild. I'm so thankful that you came on here and you presented this to us. I don't even know if there's questions coming through the chat, but it's basically everybody just going like, this is amazing. This is excellent, cool stuff. Thank you. Very well done. So I'm going to let people ask their questions.

Demetrios [00:26:52]: There is one that's in here. How do you find the features activated during the text generation? That was through an embedding model, right?

Linus Lee [00:27:02]: Yeah. I didn't have a lot of time to go through the exact technique. If you google sparse autoencoders or dictionary learning for interpretability, those are the keywords to look for if you want to read more on it. But the basic intuition is: you generate a lot of data about the internal activations of models, and train a secondary model on that data that tries to find the minimum number of features per input that can most accurately reconstruct the original activations. And it turns out that's enough of a constraint such that when you optimize the model to do that, the model learns the knobs, but they're not labeled. The model is sort of like: here are all the knobs that I think the model is using. And then there's a secondary process called automated interpretability, where you use a language model to look at the top-activating examples for each knob and say: okay, this is probably what that knob is controlling.
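
As a rough illustration of that recipe (a toy sketch, not the exact setup from the papers referenced in the talk), the secondary model is an autoencoder trained to reconstruct activations from a code that is penalized for having many active features:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy dictionary-learning setup: reconstruct activations from a sparse code."""
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, acts: torch.Tensor):
        code = torch.relu(self.encoder(acts))   # the unlabeled "knobs"
        return self.decoder(code), code

def sae_loss(model, acts, l1_coeff=1e-3):
    recon, code = model(acts)
    # Reconstruct the activation faithfully while activating as few features as possible.
    return ((recon - acts) ** 2).mean() + l1_coeff * code.abs().mean()
```

The labeling step he mentions happens afterwards: a language model looks at the top-activating examples for each learned feature and guesses what that knob controls.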

Demetrios [00:27:54]: Okay, interesting. So another question that came through: do you have favorite open source interpretability ideas or tools or things that you...

Linus Lee [00:28:16]: So most of the interpretability subfield, I think, is on the PyTorch, Hugging Face Transformers ecosystem. If you look at research work, there's a library called TransformerLens that a researcher named Neel Nanda wrote for mechanistic interpretability, specifically for looking at the microscopic details of models. But in general, I think the most interesting place to start, if you're interested in this kind of stuff, and especially related to this, is a technique that people have come to call representation engineering, which is a bit like this, but instead of discovering features automatically, you can use examples to find directions in models that do interesting things. Like, there's a good work called Inference-Time Intervention (ITI) that found a way to intervene inside the model at inference time to make the model more truthful, in a very precise kind of research way. Still a lot of caveats, but that's the kind of work that's happening. And yeah, there's a lot of resources you can find if you look into interpretability.
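
The example-driven approach described here often boils down to something like a difference-of-means direction between activations from two contrasting sets of prompts, added back in at inference time. This is only a sketch of that general idea, not ITI itself; the choice of layer, where to collect activations, and how to scale the intervention are all glossed over.

```python
import torch

def find_direction(pos_acts: torch.Tensor, neg_acts: torch.Tensor) -> torch.Tensor:
    # pos_acts / neg_acts: (n_examples, d_model) activations from contrasting examples,
    # e.g. truthful vs. untruthful completions.
    direction = pos_acts.mean(dim=0) - neg_acts.mean(dim=0)
    return direction / direction.norm()

def steer(hidden: torch.Tensor, direction: torch.Tensor, strength: float = 5.0) -> torch.Tensor:
    # Nudge the hidden state along the discovered direction at inference time.
    return hidden + strength * direction
```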

Demetrios [00:29:16]: Dude, this is cool. I think you can officially do the mic drop now if you want and say, I'm out.

Linus Lee [00:29:24]: Peace.

Demetrios [00:29:25]: Because, what? Like, I'm still... God, man. Three months later, five months later, when you do the next talk, I'm going to be just wrapping my head around this talk, and you're going to come out with something new. This is absolutely amazing. We've got a question from Roman asking: are the features the same across different language models or versions of models?

Linus Lee [00:29:50]: Yes, very interesting question. Generally they are. This is, again, something that, as a community, we're learning more about, but it seems like generally they are, especially if the models are trained on similar kinds of data. But if you imagine that the model is kind of lazy about what it learns and tries to learn things in the easiest order, where it learns the easiest things first and then the harder things, then you could imagine that different models trained on the same data, or similar-looking data, will learn the easier things, like grammar and topic, first, before they learn, for example, what is true and what is not about the world.

Demetrios [00:30:23]: Okay, so I just have to ask. It feels like you personally learn or think very much visually in visual representations when you're dealing with this stuff, and it's almost as if you're trying to bring out from the computer the obstacles that we have with dealing with just a screen. Is that a good way of putting it, or can you give us a little bit of insight into your mind and how you think?

Linus Lee [00:30:59]: Wow, that resonates. I think one way that I've thought about the job of a good interface is that a good interface enables two things. It either lets you express yourself more frictionlessly, more clearly, more precisely, or it lets you see the thing that you're trying to see for what it really is. So if you're looking at a collection of statistics, then a data table is a better way to see the data than a paragraph, and a histogram is a better way to see the data than a data table. And so the closer you can get to what it feels like to actually see what's going on... and sometimes, depending on the use case, there might be different ways of packaging that, that let you see different perspectives, hence different representations. But ultimately, I think whenever I look at an interface representation, I'm always thinking: what is the thing that I'm actually trying to see here, or get the user to be able to see and understand and grok here? And then what's the best representation or perspective that I can spin this in so that that's most obvious?

Demetrios [00:31:53]: And how does the state of where the technology is at versus what you're trying to accomplish, how does that interplay with each other?

Linus Lee [00:32:04]: I think, obviously, interpretability is still very early. And so I think we're at the stage as a community, and I've only been in the subfield for a very little bit, but I think we're at the stage where the techniques that we can be optimistic about scaling to real models are starting to emerge. And there's a lot of work going on in this space to try to apply the things that I just talked about to more state-of-the-art models. And so on the one hand you have things that are more bottom-up, scaling up those approaches, and on the other hand you have top-down approaches, like taking larger language models and trying to find specific interesting directions, whether it's for truthfulness or for specific style or topic control. And at some point I think they all meet. And then as this research foundation at the ML layer comes together and becomes easier and more accessible to play with, I think we'll see lots more interface exploration, hopefully, for how to visualize this and how to let people control that.

Demetrios [00:32:58]: Well, I love it, man. There are so many people asking about your blog, so we'll share your blog. I also highly recommend giving Linus a follow on Twitter, where you tend to post your most up-to-date ideas and all that fun stuff. This was really cool, dude. This was really cool. As a wannabe artist, this really resonated with me. I think there are so many different cool ideas that we can take from this, and it's exciting. Man, you got me all jazzed up.

Linus Lee [00:33:32]: Me too. We'll see what happens in the space. And as always, it's a pleasure to be here.
