arrowspace: Vector Spaces and Graph Wiring
Speakers

With over a decade of experience in software and data engineering across startups and early-stage projects, Lorenzo has recently turned his focus to the AI-assisted movement to automate software and data operations. He has contributed to and founded projects within various open-source communities, including work with Summer of Code, where he focused on the Semantic Web and REST APIs.
A strong enthusiast of Python and Rust, he develops tools centered around LLMs and agentic systems. He is a maintainer of the SmartCore ML library, as well as the creator of Arrowspace and the Topological Transformer.

At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building Lego houses with his daughter.
SUMMARY
Meet arrowspace — an open-source library for curating and understanding LLM datasets across the entire lifecycle, from pre-training to inference.
Instead of treating embeddings as static vectors, arrowspace turns them into graphs (“graph wiring”) so you can explore structure, not just similarity. That unlocks smarter RAG search (beyond basic semantic matching), dataset fingerprinting, and deeper insights into how different datasets behave.
You can compare datasets, predict how changes will affect performance, detect drift early, and even safely mix data sources while measuring outcomes.
In short: arrowspace helps you see your data — and make better decisions because of it.
TRANSCRIPT
Lorenzo Moriondo: [00:00:00] Once you have that evaluation, you have classification, you have search, you have a new set of tools that you can implement with this, right? And that's what graph wiring is about. What do we do with these super nice, super new, cool tools provided by epiplexity to actually supervise, manage, and curate datasets for machine learning operations and large language model operations?
Demetrios: You've been working on a lot over the last few years. Can you break down for the listener what you've been sinking your teeth into?
Lorenzo Moriondo: Yeah, absolutely. Very briefly: I've been experimenting a lot with RAGs since March last year, and I ran into some limitations of vector search.
Lorenzo Moriondo: But then, as I am [00:01:00] quite the rabbit-hole person, like probably a lot of developers and engineers around, I started to dig a little bit into the limitations of vector similarity in general. That's how I started this project and published this library called arrowspace, which is an attempt to make vector search more accessible and more powerful in some sense, especially in the scope of text embeddings.
Lorenzo Moriondo: So not just any vector space, but vector spaces with highly semantic connections. Text embeddings for language, but also, in general, any kind of vector space that has a high component of connectivity between the features in the embeddings. And then, yeah, I went through with this and I started writing a few papers.
Lorenzo Moriondo: Now I am at the fifth paper [00:02:00] since October. I wrote two papers last year and three more now. I'm especially happy with the last one, because I drew this connection between this way of doing vector search, what I call graph wiring, which is basically the generic application of this particular vector search, and a way of measuring information that was proposed in January, called epiplexity.
Lorenzo Moriondo: That is basically a new way of looking into entropy and complexity. So this journey brought me from vector search, trying to apply all these tools connected to RAG, to some more interesting, higher-level concepts. And I'm now trying to build a set of tools based on these foundations. [00:03:00]
Lorenzo Moriondo: And I think they will be very helpful for everybody doing language model operations, but also machine learning operations.
Demetrios: Okay, so first of all, amazing name with epiplexity.
Lorenzo Moriondo: Well, it's not mine. It's a paper from New York University and Carnegie Mellon.
Demetrios: Wow.
Lorenzo Moriondo: So, incredibly, I took all the theoretical part from them and just wired in what I've been working on, and the things aligned perfectly. I think we were on the same wave somehow, surfing in parallel at some point. In January, my paper on one of my moonshots, called the topological transformer, and this epiplexity paper came out almost at the same time.
Lorenzo Moriondo: And I said, oh look, this is what I'm doing, and now I have a measure for it, epiplexity. That's great.
Demetrios: So what [00:04:00] was hard about vector search, and what did you do to change it?
Lorenzo Moriondo: Vector search has been developed over the years as a purely geometric kind of operation. You have N-dimensional vectors, you compute the distance between two vectors using your favorite distance metric, which can be cosine, L2, or any other metric, and you get a score out of it: these two vectors are at distance X, right? The problem with applying this to what I call semantically dense embeddings is that some of the information, which I demonstrated to be part of the epiplexity framework, basically gets lost in the folding, in the reduction of the vector space into a geometric space. There is [00:05:00] some structural information that is encoded, in my implementation, in my algorithm, by the graph described by the relationships between the features. You take the column vectors, you compute the connections between these column vectors in the vector space, and you get this extra piece that I call topological, or spectral, information.
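(Editor's note: to make the column-vector idea concrete, here is a minimal sketch in Python, assuming an embedding matrix with items as rows and features as columns. The `feature_laplacian` name and the thresholding choice are illustrative assumptions, not arrowspace's actual API.)

```python
import numpy as np

def feature_laplacian(X: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Build a graph over the *columns* (features) of an embedding matrix
    X of shape (n_items, n_features) and return its Laplacian L = D - W.
    An illustrative sketch only, not arrowspace's implementation."""
    # Normalize columns so cosine similarity reduces to a dot product.
    cols = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
    W = cols.T @ cols                 # feature-to-feature cosine similarities
    np.fill_diagonal(W, 0.0)          # no self-loops
    W[W < threshold] = 0.0            # keep only strong feature connections
    D = np.diag(W.sum(axis=1))        # degree matrix
    return D - W                      # combinatorial graph Laplacian
```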
Lorenzo Moriondo: I went through all these experiments and found out in the end, and it's demonstrated in the last three papers I published, which I hope some people are going to double-check so I can get some very good feedback, that some pieces of information get lost during the embedding process, either in dimensionality reduction or in the way [00:06:00] the embedding process works: by basically computing only on the item space and totally forgetting the feature space of the dataset. And for geometric embeddings this can be fixed by increasing the number of dimensions of the embeddings.
Lorenzo Moriondo: So I started from 384-dimensional embeddings, the usual kind of denoising autoencoder, and I ran my tests comparing search using cosine on these limited-space embeddings against arrowspace, my library, in the same space. And I found out that there is a way of doing this search better, because there is a chunk of information, what I call topological information, that gets lost. So applying topological search to the same space, I can achieve better search. [00:07:00] There are different scores for that, but in general a better, more semantically meaningful search. I developed a score for this called MRR top-zero, which is basically a sort of topological PageRank, a way of measuring these things. So I measured this in the geometric search and in the arrowspace topological search, and I found this gap. Recently I've done the same thing with larger embeddings.
Lorenzo Moriondo: With 1024-dimensional embeddings, what came out is: look, this piece of information that got lost on the 384 out of the denoising autoencoder is actually preserved by this new way of doing embeddings, but with a much higher dimension. So basically what was happening is that I could get the same quality of search as a [00:08:00] 1024-dimensional embedding using the 384 embeddings, because arrowspace basically rebuilds somehow the information that got lost in the embedding process, through what epiplexity measures as generated information. So I was generating information, in the framework as explained by epiplexity; I was regenerating the information that was lost by the embedding process using my algorithm, arrowspace. And the outcome is: arrowspace worked better in topological terms, both in the 384- and in the 1024-dimensional space. It basically allowed a 384 space to work almost like a space that had almost three times the number of dimensions. It was a very "oh!" kind of discovery.
Demetrios: So this is [00:09:00] fascinating to me because it's almost like you are getting a bit of a cheat code.
Demetrios: You're getting this extra dimensionality for free. What are the downsides of it? Because I imagine there's no free lunch.
Lorenzo Moriondo: No, absolutely. Yeah.
Lorenzo Moriondo: Let's take a step back to the general objective. The general objective was: we want a better, more capable search to apply RAGs on, right? Because retrieval-augmented generation depends a lot on searching among documents. So the initial hypothesis was that geometric search finds the first 3, 4, 5 top-ranked documents very well, but starts to fail very steeply after the fifth document, and that's demonstrated with comparative tests. I wrote different blog posts about this, because that's how it works: it finds [00:10:00] the top 3, 4, 5 documents very well, but then its performance just goes down steeply for the tail of the ranking.
Lorenzo Moriondo: So I said, okay, but retrieval-augmented generation needs something with maybe a better, smoother distribution, right? We don't want peak performance on the first three and then, okay, whatever happens with the rest is fine. No, because RAG may have to walk down different pathways to do some kind of reasoning, some chain of thought. You can start with the top-three ranking results and have a great outcome. But at some point you start looping, you get stuck in your local minimum, because however many queries you run on your documents, you always return the same top 3, 4, 5. So I said there should be a way to make the distribution of the ranking less steep [00:11:00] and allow the search to look at the lower ranks, so that if you got stuck in the top ranks, if the reasoning maybe got into this local minimum on the top ranks, okay, let's go and look at the lower ranks.
Lorenzo Moriondo: Right? And that's exactly what topological search does. There is this slider with which you can modulate how much geometric search and how much topological search you want. So if you want, okay, let's start with pure geometric search: slider at one, a hundred percent, and we look at the top-three, top-five results. But then maybe you want, okay, let's try to open up the reasoning, right? Let's look a little bit at the edge of the distribution, at this tail. What happens if I search less popular, less top-ranking results? So you can adjust this slider and go down to [00:12:00] almost a 40/60 mix, 40% geometric search and 60% topological search. And you start having still very good results, but they somehow give a different point of view, a different approach to the problem than the top-three, top-four ranks. Right?
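(Editor's note: the slider is easy to picture as a weighted sum of the two score tracks. A minimal sketch, assuming both score arrays are pre-normalized per query; the function name and normalization convention are assumptions, not arrowspace's API.)

```python
import numpy as np

def blended_scores(geo: np.ndarray, topo: np.ndarray, alpha: float) -> np.ndarray:
    """Blend geometric and topological relevance scores with one 'slider'.
    alpha=1.0 is pure geometric (cosine) search; alpha=0.4 is the 40/60 mix
    described above. Assumes both arrays are normalized to [0, 1] per query."""
    return alpha * geo + (1.0 - alpha) * topo

# Usage: re-rank candidates after nudging the slider toward topology.
# ranking = np.argsort(-blended_scores(geo, topo, alpha=0.4))
```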
Demetrios: I like that.
Lorenzo Moriondo: And so you can just tell your LLM, through a simple MCP server: let's adjust my search. I'm stuck in this loop at this point, maybe I should adjust my search. Okay, let's put this slider down, let's look more into topological search. And you just start receiving still very highly related, highly meaningful documents for the context, but different ones, right? So you can restart your chain of thought and say, okay, this way I reason one way, this way I reason another way; what are the common patterns in these things, and so on.
Demetrios: It's a way to let the LLM keep exploring on relevant data. [00:13:00]
Lorenzo Moriondo: It's like when you do geometric search, you basically go depth-first. If you adjust the topology, you start opening the graph and start to see more breadth-first, right?
Demetrios: Yeah. Now, you mentioned that there's maybe a way to implement this with MCP servers. Have you seen any of the vector stores or vector databases already incorporate these algorithms? Would it be at the level of a vector store? Is it something that you add on top of it? Where does it play in the stack?
Lorenzo Moriondo: It would just be a different track for running search. You can have geometric search and topological search. And no, nobody has implemented that. I have some side projects trying to do these things, like running comparative RAG installations: one using traditional geometric vector-space search, one using a mixed, hybrid topological-geometric search, and things like that. So we'll [00:14:00] see the results soon enough. The fact is that geometric search is so well established and so well optimized
Lorenzo Moriondo: that it is obviously the first way to go for maybe 90% of use cases, right? Up to the point where you need very good reasoning, or you start having very complex reasoning. That's the point we're reaching now with RAGs, right? In the last 20 years or so, and even before, geometric search was perfectly good, and it became highly optimized with HNSW. Now we have these several layers of graphs trying to look into billions of records, and it's great. But maybe it's time to push these things a little bit more, because we have more intelligent systems to deal with. So that's my point of view.
Demetrios: Yeah. So if I'm understanding you correctly, you want [00:15:00] to use the tried-and-true algorithms until you saturate them, and then reach for something like this.
Lorenzo Moriondo: Basically, what I'm trying to do now, in experimental terms, is add topological information to the geometric search, because in the end it's a weighted sum, right? You have an alpha that is the geometric search component and a beta that is the topological search component. So through the arrowspace library you can modulate these things until you see that you reach something better. Maybe just marginally better, the marginal 5% or 8% that I demonstrated in my blog posts, but something that kicks the RAG out of its local minimum, maybe, and allows it to say: okay, let's climb out of this local minimum I'm running around in now, and [00:16:00] maybe reach another local minimum that tells me something meaningful as well, very much related but different, in a way that lets me re-elaborate my chain of thought differently.
Demetrios: So it's that razor-thin edge, because you don't want to introduce too much noise and then have irrelevant data come through.
Lorenzo Moriondo: The fact is, the problem with cosine similarity is that it has more noise if it's run as a pure geometric search, because you just get stuff that is similar geometrically but not in meaning. There is no semantic structure that tells you: this is geometrically very similar, but maybe it's totally out of context, right? That's why, if you run a geometric search and you ask for, say, the top 50 records, the last 20 records are just garbage documents, gibberish documents, because [00:17:00] they just happen to be close in the geometric space but are semantically totally irrelevant to what you are looking for.
Lorenzo Moriondo: And that's what topological search fixes in the first place. Plus, it allows a better context. There is a ratio, the head-tail ratio, in which we measure the semantic distance between the head and the tail of the ranking. With geometric similarity this is always off balance: the head tells you the results are about something, and the tail is about something else. With topological search this thing is rebalanced, so the head and the tail talk in the same semantic space, because we inject semantic information into the search.
Demetrios: Hmm.
Demetrios: Now, how does this [00:18:00] play into memory? Because I know you've been going down the memory-for-agents route quite deeply also.
Lorenzo Moriondo: Yeah. Basically at some point this became: okay, maybe we can do search and memory in the same structure, in the same data structure, right? Because we basically generate this graph out of the embedding space, and this graph is a sort of permanent memory of the semantic space, because it collects the invariants of this semantic space. So say you have a bunch of legal documents about some particular topic, or a bunch of papers about some kind of philosophical topic, right?
Lorenzo Moriondo: You build the graph Laplacian out of the embedding space of these documents, and what does the graph Laplacian represent? There is good [00:19:00] background work on this by all the researchers who work on Laplacian representations for vector spaces: it basically represents the invariants of this space. So if the vector space is text embeddings, basically a reduction of all the meaning in a given field of study, the graph Laplacian is automatically a representation of its invariants. So you somehow get some long-term memory there.
Lorenzo Moriondo: You have a summary of the summaries of all the documents in your vector space, all compressed in a sparse matrix. This sparse Laplacian is a very interesting structure. I talked about it extensively in my papers and in my blog posts. It has incredible properties and it's heavily used, but in the item space of the vector space: basically, let's build the Laplacian on all the items of this vector [00:20:00] space. What arrowspace does is flip the concept by saying: no, we want to look for invariants in the feature space. We want to look at relationships between features. What is the relationship between the color of all these documents, the length of all these documents, the sentiment in all these documents? Not just: what is the relation between the color and the sentiment of document A and document B. No, we want to look into the graph of the column vectors, the feature graph.
Demetrios: And you're doing it at a document level, not at a chunking level.
Lorenzo Moriondo: That really depends on how you do your embeddings. If you chunk your documents in your embeddings, you will come out with embedded chunks, right? The example I usually use is the CVE dataset, a dataset of reports of common vulnerabilities in software and systems. Usually these are [00:21:00] large reports in JSON format. So they're basically text files with a title and a description, and I just build my embeddings by passing the entire document. But in theory, if you have books, you can chunk them into paragraphs at embedding time. Then you have to do the reconstruction: say which chunks belong to the same book, which chunks belong to the same paragraph, et cetera. But yeah, this works the same both for documents and chunks.
Demetrios: Okay, so sorry, I cut you off. I derailed you a little bit. You were talking about how it flips things on their head, looking at the features instead. So how can the features tell you information? That's the connection I can't quite make.
Lorenzo Moriondo: That's the real semantic metadata you're looking for. There is an infinite number of definitions for metadata, right? But in the knowledge-graph [00:22:00] space, for example, which you and your audience know very well, what you have is a graph that goes on top of the existing graph. You have the graph of the nodes, and then you have the graph of your classes on top of the graph of the nodes, and you call that metadata. It is the relationship between classes of documents; it is the properties that connect documents in a way defined at the class level, the metadata level, not at the instance, item level.
Lorenzo Moriondo: Same thing for vector spaces. There is a space of the nodes, the space of the documents, the item space. But then there is a space that connects the features, because each node has its feature vector. So we say: all the nodes of this space are represented by this feature vector; dimension zero is, say, the color or the length. But there is another [00:23:00] vector for dimension five that is another characteristic, and there is how these features connect together. So you see that you go to a second-order kind of layer. I call it the metadata layer, because that's exactly what it is conceptually, right? And it turns out that there is information there that can be used for search, because you don't only search the node space.
Lorenzo Moriondo: You can search the feature space too, and that's what mathematicians call topological, because this space gives you somehow the third dimension, the manifold of the vector space; basically, this is the space of the second derivative of the vector space. And that's where a lot of cool stuff is, because that's where all the structural information is found. And that's the structural information arrowspace injects into your query to find these super cool new results, results related to your query that are not geometrically close but [00:24:00] maybe follow different pathways down this morphology. They're still related, but maybe forgotten by the geometric search.
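(Editor's note: one plausible way to picture "injecting structural information into the query" is to score vectors by their Rayleigh energy over the feature-graph Laplacian, the discrete analogue of the second derivative mentioned above, and prefer candidates whose energy matches the query's. A hedged sketch; the function names are assumptions and arrowspace's real scoring may differ.)

```python
import numpy as np

def spectral_energy(L: np.ndarray, v: np.ndarray) -> float:
    """Rayleigh quotient v^T L v / v^T v over a feature-graph Laplacian L.
    Low energy: v varies smoothly along well-connected features.
    High energy: v disagrees sharply across strongly connected features."""
    return float(v @ L @ v) / (float(v @ v) + 1e-12)

def topo_rerank(L: np.ndarray, query: np.ndarray,
                candidates: list[np.ndarray], k: int = 10) -> list[np.ndarray]:
    """Re-rank geometric candidates by how close their spectral energy is to
    the query's, surfacing items that live in the same structural region
    even when they are not the geometrically nearest neighbors."""
    q_energy = spectral_energy(L, query)
    return sorted(candidates,
                  key=lambda c: abs(spectral_energy(L, c) - q_energy))[:k]
```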
Demetrios: So you're kind of blowing my mind here, that that metadata has any value.
Lorenzo Moriondo: Exactly, and that's what was forgotten. Basically it was left there like an archaeological relic with no use, right? And no, it has value. I found this last year myself, and it was like: come on, this is not possible, how did nobody look into this before? And I said, okay, let's try. I'm actually taking this time to walk down this RAG vector-space kind of problem. And it's true, it is there. Through the epiplexity framework I measured it last month; I ended up measuring what was missed by the geometric [00:25:00] search. It's what the epiplexity framework calls structural information, and it's measurable. It's almost 20, 30% of the total information. So every time we run a search with geometric search, we basically lose 15, 20% of the information we could have used. It is there, but we didn't regenerate it, because we didn't rebuild the graph Laplacian. This is mathematically solid as far as I could investigate, and I would be very, very grateful to anybody who looks into this and tells me: no, you're not right. I will be the happiest person if somebody finds some fault in this reasoning. But at this point it looks like it works, because I have run tens of experiments, tens of comparisons in different settings, again shrinking or increasing the number of dimensions of the embeddings, [00:26:00] and I hope to be right.
Demetrios: So that's where you're getting these dense vectors for free, basically.
Lorenzo Moriondo: Yeah, we're getting information out of the same dense vector space, because we weren't looking in the feature space before.
Demetrios: Yeah, okay, I'm starting to wrap my head around it. And then epiplexity: you should probably break down what that paper is and what your paper was, because I know you're referencing it quite a bit, but I'm not super clear on exactly what it is.
Lorenzo Moriondo: I think nobody is, because it's such a new thing.
Lorenzo Moriondo: And I guess they have only one citation for now in Google Scholar, because it's such a new thing. So I don't know if I can really explain it; maybe you can actually ask the people who wrote the paper as guests. But my basic understanding is that what Shannon [00:27:00] information measures is what they call random entropy.
Demetrios: Mm-hmm.
Lorenzo Moriondo: But they said: look, in general terms, if the universe were a vector space, you would witness Shannon entropy. But everybody that computes something doesn't look at the entire universe. They look only at the problems that are within their computing capacity. So if I can compute up to a certain level, for a certain time, with a certain amount of computing power, I can compute this number of algorithms, right? It is true that there are infinite algorithms out there and they are all under the law of entropy. But if I limit the investigation to only the algorithms we can compute, which is related to what [00:28:00] Wolfram calls, I guess, computational boundedness, then, they said, we can actually measure random entropy but also something else, called structural information. And epiplexity is basically both of these things. Instead of just looking at random entropy, it bounds the computation, the possible computations, to the observer, to the one that runs the computation and to its computational power, and then we can also highlight and measure this other thing called structural information. And in this framework, the graph Laplacian in my arrowspace algorithm would be the structural information part, while geometric search deals only with this very wide, generic, universal kind of construct that is the [00:29:00] geometric space of the vectors. That's my understanding.
Demetrios: I got this far in understanding it: epiplexity is constraining the space that you work with.
Lorenzo Moriondo: Exactly. Because like in physics, right? In contemporary physics, you know something only because you go and observe it. So everything you see is limited by your detector: your eyes, or your, like, hadron collider, right? Everything is dependent on the observer. So epiplexity does the same thing for information. It's more like what relativistic physics does: we don't look at the entire universe, we look at the frame of reference. We have two bodies, they move at relative speeds to each other, and what happens depends on which observer you pick and how fast this observer is going in relation to the speed of light, right? Epiplexity [00:30:00] applies the same thing to information, saying: look, you cannot compute the entropy of the universe. Well, you can, but it may not be a rule that applies to every observer in the universe, because there are observers that go faster and observers that go slower. Same thing here: there are observers that have a vast capacity for computing, and there are others that are limited in terms of computing. So for one observer it may happen that a problem is uncomputable, while for another observer it may be computable, because it has access to more computing power, and maybe it can run the algorithm for more time because it lives longer, or for whatever other reason. So epiplexity tries to bound every algorithm inside what is called, and I don't want to [00:31:00] get this wrong, a random walk model that is bound to the observer. You do this operation: you take the algorithm, you encapsulate it inside a random walk that is observer-bounded, and then you compute the structural information that is generated by this bounded pair of the observer and the algorithm.
Lorenzo Moriondo: It is crazily interesting to me, so I hope it is for everybody.
Demetrios: When you bound it, then you're able to get a score on what the...
Lorenzo Moriondo: If you want, we can actually take a look at a Jupyter notebook that I wrote exactly to explain this point in my last paper. If you want, if we have time.
Demetrios: Bust it out. Yeah, I would love that.
Lorenzo Moriondo: Okay, I will share it, no problem. So basically this is the paper, and everybody can go take a look. If we scroll here on the left... can you see it?
Demetrios: Yep.
Lorenzo Moriondo: There is the notebooks directory [00:32:00] and there is this zero-zero notebook. Basically, in this notebook I try to explain to myself what epiplexity is and how it relates to my research; it is the object of my latest paper. As you see: arrowspace, feature space, graph Laplacian, structural information. So the question was: is it true that the arrowspace graph Laplacian encodes, encapsulates, structural information as described by epiplexity?
Lorenzo Moriondo: Right? That's the main question. And if we go through all the steps of this notebook, we'll find that yes. But very briefly, to know what epiplexity measures, we can just take a look at the first two or three steps. First, it computes the minimum-length program to compute the graph Laplacian from [00:33:00] the vector space. It's a measure of: what is the minimal set of bits that transforms a vector space into a graph Laplacian? Basically it's a Kolmogorov kind of measure for complexity, right? What is the minimal set of bits that maps my vector space to the graph Laplacian? And that's the first part of computing epiplexity. The second one: you encode the pipeline as a prefix-free program, right? You want to compute the length of the arrowspace program; there is a mathematical construct that does this. So: what is the minimal number of bits to describe this program?
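(Editor's note: true minimum program length, Kolmogorov complexity, is uncomputable, so in practice one bounds it from above, for instance with compressed size. A rough sketch of that proxy, assuming nothing about the paper's exact estimator; `description_length_bits` is an invented name.)

```python
import pickle
import zlib

import numpy as np

def description_length_bits(obj) -> int:
    """Crude upper bound on description length: the number of bits in a
    zlib-compressed serialization. A practical stand-in for the uncomputable
    minimal program length, not the epiplexity paper's actual estimator."""
    return 8 * len(zlib.compress(pickle.dumps(obj), level=9))

# e.g. compare the raw embeddings against their feature-graph Laplacian:
# bits_X = description_length_bits(X.astype(np.float32))
# bits_L = description_length_bits(feature_laplacian(X).astype(np.float32))
```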
Lorenzo Moriondo: That's the first step, and this is all described in the epiplexity paper, published the 6th of January; you can go and look into that. The second step is the wrapping I was telling you about before, [00:34:00] because we have to turn the Laplacian of this vector space into a Laplacian-constrained Markov random field. That's it; I have probably already mangled the acronym myself, but I tried to go through these things line by line and understand them. So we do this: we basically encapsulate this system of the observer and the algorithm inside this shell of a Markov chain model, right?
Demetrios: Yep.
Lorenzo Moriondo: And in the end we run a test: how much do these things decompose and compress the original space? Because from the paper you can extract three tests that tell you your epiplexity measure is correct. Here they are: one is the compression test, one is the spectral gap test, one is the downstream lift test. If all three tests pass, [00:35:00] your algorithm can be measured in terms of epiplexity.
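(Editor's note: of the three, the spectral gap is the most self-contained to sketch. For a connected graph the smallest Laplacian eigenvalue is zero, and the gap to the next one, the Fiedler value, measures how much global structure the graph carries. A hedged illustration of what such a test could check, not the notebook's actual code.)

```python
import numpy as np

def spectral_gap(L: np.ndarray) -> float:
    """Gap between the two smallest eigenvalues of a symmetric graph
    Laplacian. For a connected graph the smallest eigenvalue is ~0, so the
    gap is the Fiedler value: larger gap, more global structure."""
    eigvals = np.linalg.eigvalsh(L)   # sorted ascending; L is symmetric PSD
    return float(eigvals[1] - eigvals[0])
```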
Demetrios: And epiplexity, just so I'm clear, is giving you more information on the randomness that you're getting from arrowspace.
Lorenzo Moriondo: It's totally different from entropy.
Demetrios: Okay.
Lorenzo Moriondo: Epiplexity basically adapts traditional information theory to what we discovered with machine learning and neural networks.
Lorenzo Moriondo: Right. Traditional information theory told you that you cannot extract more information than is in the data, because it's just not there, right?
Demetrios: Yeah.
Lorenzo Moriondo: So it tells you that whatever you do with the data, you will get entropy, you will lose information, right? But we demonstrate that it's not true, because we now have algorithms that do generate information out of the vectors, out of data in general, by generating new structures.
Demetrios: And this comes back to the whole idea of: hey, if we look [00:36:00] at the metadata and find trends in it, that actually gives us a denser vector.
Lorenzo Moriondo: Exactly. We get information that is not in the vector space. So epiplexity basically demonstrates that we can generate additional information from existing information. It's not that for every dataset, whatever you do with it, you are going to lose the information you have: you compress it, you lose information; you decompose it, you lose information. It says: no, look, now we have algorithms where, if you take a bounded observer with the algorithm, you can measure that these things generate information. It's not just a thermal loss; there is not just randomness taking over. To me, this is connected to what are called inverse problems, right?
Demetrios: Mm-hmm.
Lorenzo Moriondo: When you have a noisy image and you want to denoise it, [00:37:00] this is called an inverse problem. You are generating information that is not there by looking at the relations between the pixels. Well, not generating, but extracting information that is hidden there by using algorithms, right? That's the same principle: whenever you do denoising, you generate information out of something that is not supposed to have that information. So when you do super-resolution of images, for example, you're doing this thing, right? You go from 720p to 1024p; you apply super-resolution to the image. And that's exactly what epiplexity measures: how much information is generated by the algorithm, not how much is consumed by it, and they demonstrated it mathematically.
Lorenzo Moriondo: I mean, it's obviously a very young framework, so it's not as established, and it needs to be double-checked and tested. And I guess arrowspace is [00:38:00] the first algorithm that tests itself against it, the first algorithm on top of which we computed information generation, the amount of information generated by the computational process. And that's something where I said: oh wow, I connected some dots. That's the very latest thing. But if you look back at my previous papers, there is the stepwise stairway that brings you from a simple search-in-a-vector-space kind of algorithm to a more general algorithm.
Lorenzo Moriondo: Because at this point, as it says in the abstract, arrowspace, the graph Laplacian applied by arrowspace in the feature space, is generic enough to [00:39:00] provide a good approximation and good results for search, classification, anomaly detection, diffusion, dimensionality reduction, and data evaluation. And with all this, obviously, my idea is: okay, this is super cool, we need to use this for LLMs and machine learning operations, right?
Demetrios: Yeah.
Lorenzo Moriondo: Because once you have the evaluation, you have classification, you have search: you have a new set of tools that you can implement with this, right?
Lorenzo Moriondo: And that's what graph wiring is about. What do we do with these super nice, super new, cool tools provided by epiplexity to actually supervise, manage, and curate datasets for machine learning operations and large language model operations? You've got some answers about this already, but the [00:40:00] paper that deals with applications to machine learning is the previous one; it talks very extensively about how these tools can be used in AIOps or MLOps pipelines.
Demetrios: Well, it does feel like you are doing this with datasets after the model has been created. Have you also thought about trying to go for the datasets that the models are being created on?
Lorenzo Moriondo: You mean the embeddings model?
Demetrios: Yeah, so the training data.
Lorenzo Moriondo: Yeah, exactly. There is a big question mark over whatever we do with LLMs currently, because everything is connected to how we do embeddings. That's why it's very important, and there are now teams and teams of engineers working only on how we move from raw text to embeddings, right? It's a field on its own. We have [00:41:00] models with 4 billion parameters only doing this at the moment. So that's one thing. If instead we go
Lorenzo Moriondo: on the numerical side and say: let's look into pure numerical data, right? Machine learning data, regressions, decision trees, and all this stuff. My intuition is that it really, really matters how we do feature engineering. Because, let's say we have this massive amount of raw data coming from the Large Hadron Collider or whatever other big machinery for physics, or from any other kind of measurement, right? Obviously you don't run your machine learning models on the totality of this data; we are talking about thousands of terabytes. So what you do is some feature engineering: you run some models to reduce your data into something manageable, workable. This is the same thing that happens with satellite data, right? The image you see on [00:42:00] your screen from the satellite is just a model of all the raw data that the satellite collects and pushes down to Earth. So it's really important how the people at NASA, at ESA, or wherever else, model these algorithms to actually make this data usable. My intuition is that if we embed the feature relations in some better way than we do now, while we do feature engineering for this data, we may have more powerful topological search downstream. So your question is really relevant, because it really matters how you treat the data upstream.
Lorenzo Moriondo: But having arrowspace, you can indeed measure how much structural information your feature engineering generates. So you can compare: if I generate this dataset using this [00:43:00] model, it produces downstream zero point something of structural information. What happens if, on the same data, I use a different feature engineering model and compare? Oh look, this one generates 1% more structural information. You see that these things can work upstream and downstream, because my idea in the beginning started from: okay, let's apply this to large language model performance, right? We can say: let's take the latent space and compute the graph Laplacian on the latent space. That's what at Anthropic they call, I guess, mechanistic interpretability: they go and investigate, analyze the latent space to find where the best tokens are generated, in which subspace. That's exactly what you can do with the graph Laplacian. That's downstream. But [00:44:00] upstream you can do what we were talking about before: let's measure which model does the best feature engineering. So you see that this is quite an effective point of view in terms of what we can do with data: how good is this data, how will this dataset do if I add something, how will this dataset look six months from now, and things like that.
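(Editor's note: putting the earlier sketches together, the upstream comparison described here could look roughly like the following, with the spectral gap standing in for the structural-information score. The real epiplexity measure is more involved, so treat this purely as the shape of the workflow; `pipeline_a` and `pipeline_b` are hypothetical.)

```python
import numpy as np

def structural_score(X: np.ndarray) -> float:
    """Stand-in structural-information score for one feature-engineering
    output: the spectral gap of its feature-graph Laplacian. Reuses the
    feature_laplacian and spectral_gap sketches defined earlier."""
    return spectral_gap(feature_laplacian(X))

# Compare two hypothetical feature-engineering pipelines on the same raw data:
# X_a, X_b = pipeline_a(raw), pipeline_b(raw)
# print("A:", structural_score(X_a), "B:", structural_score(X_b))
```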
Demetrios: Coming back around, I'm not sure I fully understood how this connects to memory with agents.
Lorenzo Moriondo: Okay. The graph Laplacian is a permanent memory in the sense that it holds the invariants of your context. It tells you which pathways are possible from one feature to another. Mathematicians [00:45:00] describe this thing as a three-dimensional space. So, for example, if you have outliers in your feature space, these outliers will be denoted with very high energy, while very connected features are denoted with very low energy, right? The graph Laplacian basically describes all the paths you can take from a very connected feature to a loosely connected feature. So it basically constrains the way you can do reasoning, somehow, inside the feature space of the vector space.
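(Editor's note: a tiny numeric illustration of the energy claim, with the toy graph and vectors invented for the example. A vector that is constant over the graph has zero energy, while one that disagrees across strongly connected features scores high.)

```python
import numpy as np

# Toy feature graph: features 0-2 strongly connected, feature 3 an outlier.
W = np.array([[0.0, 1.0, 1.0, 0.00],
              [1.0, 0.0, 1.0, 0.00],
              [1.0, 1.0, 0.0, 0.05],
              [0.0, 0.0, 0.05, 0.0]])
L = np.diag(W.sum(axis=1)) - W

def spectral_energy(L, v):  # as in the earlier sketch
    return float(v @ L @ v) / (float(v @ v) + 1e-12)

smooth = np.ones(4)                       # constant over the graph
jagged = np.array([1.0, -1.0, 0.0, 0.0])  # disagrees across connected features
print(spectral_energy(L, smooth))   # 0.0: perfectly smooth, low energy
print(spectral_energy(L, jagged))   # 3.0: high energy, outlier-like behavior
```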
Demetrios: Okay, but we're not talking about agents remembering in the way that, like, the agent understands that you only eat vegetarian food, or...
Lorenzo Moriondo: No, it's not about remembering those, whatever they're called; they have different names for that, right? It's not about that. It's a different kind of memory, more like a permanent, long-term memory of a given context. But it can also be applied to those, because if you have a list of memories that your RAG remembers, you can do the same thing, because that's a vector space itself, right? You can have embeddings of this context memory, I don't remember what they call it, and then do the same operation on the context, so you can actually build your permanent memories of your context. But that's a different kind of memory indeed; I think they call that transient memory or something like that. What the graph Laplacian describes or defines is the long-term, permanent relationships inside a context of documents. If you know it, it's a memory, right? Because that's exactly [00:47:00] what you want to remember: if you're an expert in a field, somehow you want to remember the invariants, and then connect those invariants to your application.
Demetrios: Yeah.
Lorenzo Moriondo: Or at least that's how I see the process itself.
Demetrios: Can you talk to me for a minute about this idea you follow of discovery-driven development? Because I think that's also pretty fascinating for understanding a little bit more about how you work and how you go through and test some of these ideas.
Lorenzo Moriondo: Absolutely, yes. Thanks for the question, because we have talked about the content, and I hope the content is interesting to people, but going through this I also understood a lot about what I'm really doing here, like what my method is, right? My method is mostly based on [00:48:00] intuition, and intuition works great with large language models, because that's what large language models lack. They don't have an understanding of the world that allows them to be intuitive, right? They're constrained by their way of seeing the world: they only see the world through language, through text. So they cannot build the level of intuition that humans can, because we have so many more senses. So what I do, basically, is try to leverage this very intuitive part of what I do, and inject this intuition into the process of multi-agent research processes, right? I work with different LLMs in parallel and I just try to collect what they do, and inject my intuition to correct them when I think they're going a little bit out [00:49:00] of the scope of what I'm doing. And this connects greatly with what happens in the meanwhile in scientific research, because in the meanwhile you have all these new papers published at a rate we have never seen before, right? So you cannot really work in your own locked-down understanding of things. You always need to update this understanding, in a very Bayesian way if you want, right?
Demetrios: Yeah.
Lorenzo Moriondo: It's Bayesian in the process, but also in listening to what happens outside. So when I find a paper that connects, I instantly try to build a new intuition of what I'm doing based on the new paper. This happened with the epiplexity paper, and it happened also with another paper that was greatly impactful on me. But that's another really long talk that [00:50:00] maybe we don't have the time to start. That's the process, right? You have your own Bayesian process that goes on with your multi-agent things, but sometimes something from outside happens that is very meaningful to what you're doing, and you cannot just keep going without including what has happened in the meanwhile.
Lorenzo Moriondo: Right? So I said, okay: I built arrowspace, I built this super crazy experiment called the topological transformer, a transformer architecture that works on spectral indexing, on topological search instead of geometric search. That is very much my moonshot, something I'm not following at the moment. But in an experiment I did, I said, okay, what happens if I look at arrowspace in the framework of epiplexity? And all this new stuff came out, right? So you can see how the method, being so open to the outside, [00:51:00] taking questions from things that come from outside what you're doing, is fundamental, because it allows you to stay in touch with what's happening outside and to improve very much what you're doing. It's a kind of science-driven development, somehow, in this sense.
Demetrios: And how do you go about updating these frames of reference? Because I'm assuming you are constantly reading new papers, constantly trying to build new intuition, but it sounds like it's only occasionally that something hits.
Lorenzo Moriondo: I have GitHub repos that are only text files, text that I've built using large language models. I say: okay, this is good, I put this in a text file inside this Git repository. I have a file system that basically builds up, day by day, on top of this thing, [00:52:00] and I have a bunch of text files that I never opened again, but they're still there. And maybe there are those three or four text files in each directory that instead branched out into something else. So I could go through the entire history of what I've done, because I have the entire file system of what I've done since March last year.
Demetrios: Wow.
Lorenzo Moriondo: And yeah, this contains the prompts and the answers. So in theory, if I wanted to do meta-research on what I'm doing, something I have no time for at the moment because of all the other things I'm trying to carry on, I could actually do some analysis of which prompt worked best, or which answer was the most impactful on what I did in the months after, and all these kinds of things. But it's just basic prompt engineering, prompt architecting, I guess. It's nothing very, very super fancy. [00:53:00]
Demetrios: There was one spicy question that I wanted to ask: whether you've inevitably tried to play with vector search versus just giving an agent a tool, especially a coding agent, just being like, hey, grep, and seeing what the differences are. How do you compare those two? When do you think to reach for one versus the other, if at all?
Lorenzo Moriondo: Yeah. Basically what grep does is what is called lexical search, right? That is also what BM25 does. It basically counts the instances, how many times the word, or the stem of the word, is present. That's called lexical search, and that is part of geometric search. Now we have this whole wave of hybrid search, in the sense that it mixes geometric search with lexical search, right? What you're [00:54:00] doing is still geometric search. In the geometric search you define your context by distance among vectors; in the lexical search you do a more statistical kind of thing: you count the statistical characteristics of the words in the context. That is a pretty good way of establishing a context, because the kinds of words, the classes of words, the families of words you have in a text are, somehow, the definition of context. The problem is how you measure these distances. Okay, this word appears 50 times in 1,000 words, right? Does that mean this word defines this context? So you say: okay, let's add some geometric search, some cosine or L2 distance search, to this. And with these two things you can tell better which context a document belongs to. But then you go back to the original problem of running geometric vector search on the vectors you built, right? So you go back, and you lose all the semantic part that the feature analysis brings in. So yeah, that's lexical search, lexical analysis.
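(Editor's note: for contrast with the topological blend earlier, here is roughly what that lexical-plus-geometric hybrid looks like, stripped to the bone: term counting plus a cosine score, with no feature-space component. The weighting and names are assumptions, and real BM25 adds document-length and frequency-saturation terms this sketch omits.)

```python
from collections import Counter

import numpy as np

def lexical_score(query_terms: list[str], doc_terms: list[str]) -> float:
    """Bare-bones lexical matching: how often the query's terms appear in the
    document, length-normalized. What grep-style or BM25-style search counts,
    minus BM25's saturation and length weighting."""
    counts = Counter(doc_terms)
    return sum(counts[t] for t in query_terms) / (len(doc_terms) + 1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Plain geometric similarity between two embedding vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def hybrid_score(lex: float, geo: float, w_lex: float = 0.3) -> float:
    """The hybrid mix described above: lexical statistics plus geometric
    distance -- still with no topological / feature-graph component."""
    return w_lex * lex + (1.0 - w_lex) * geo
```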
Demetrios: Lorenzo, this has been great, man.
Demetrios: I appreciate you coming on here.
Lorenzo Moriondo: Thanks a lot, Demetrios. Thanks a lot for what you do in the community; it's great to have people who can work so well at building up communities and products. It has been great talking to [00:56:00] you.
