Quotes
If you make a lot of people more productive, that accelerates progress overall.
Whatever dimension you look at, from hardware capabilities to the number of developers, applications, and domains, it applies both within Facebook and, obviously, across the industry.
As I mentioned, it's a community project, not a Facebook/Meta-only project. I feel that if you work with the community, you can collaborate very actively with people from academia, industry, and different hardware vendors. It feels pretty exciting and very different, almost like running a mini startup.
The MLOps space grew so much. There are a lot of exciting opportunities and a lot of problems to solve.
Always operate in product mode. When people are pushing the boundary, they can unlock their ideas, unblock themselves, try to create new trends, and fundamentally push efforts forward, so there's definitely a product-building mindset.
It's really important to make the modeler happy. This is what has fundamentally driven the innovation in AI so far, and this trend hasn't been decelerating.
Basically, if you look at the state of models over the years, there have been a lot of incremental improvements, the number of papers published is absolutely crazy, and all the benchmarks have been pushed, but new revolutions have also been coming.
AI is basically an applied science field. The only way to progress is to increase the rate of iteration and try new ideas. If you try new ideas faster, you end up with more progress on average.
In the early days, putting the modeler first was really crucial, because that's where innovation flows from research to production.
As some techniques become more mature and widely applicable, it makes sense to upstream and absorb them into the PyTorch package itself, so they can be more easily accessible to a broader audience.
The ML space is still very early, which means it's very fragmented and partitioned, so making integration easier is often the highest-leverage way to remove friction on the user side. That's one big theme the PyTorch community is trying to enable.
It's easy to get something working and then make it fast incrementally afterward, while still staying in pretty much the same environment. That's the pivot to PyTorch as a framework serving both, instead of trying to export.
The war stories are about the constant flow of new ideas, models, and pipelines between the two worlds; you should not think of it as going from one mountain to another and building different solutions for each.
People are trying to use so many different hardware platforms that keep popping up, and there's also a second-order question: how to enable different hardware vendors to build solutions where they can do this optimization themselves, whether on an existing device or the latest one.
I would imagine more and more people doing ML as part of their work, especially in products that have benefited from it. That expands quite a bit, so more and more regular engineers should learn this space.
Our job is to make the best tools to enable this innovation process, both for folks who are pretty far along, trying to advance ideas and push MLOps to be flexible, and for people who are just getting into this data-driven mindset, by providing higher-level abstractions.