
What does a Machine Learning Engineer at DPG Media Do?



October 13, 2022
Demetrios Brinkmann

This is a project for the MLOps Community to fully understand what Machine Learning Engineers do at their jobs. We want to find out what your day-to-day looks like, from the most granular to the most mundane. Please tell us everything! This is our chance to bring clarity to the different parts of MLOps, ranging from big companies to small start-ups. This is the fourth edition in this series. Find the first three here, here, and here.

Jeffrey Luppes, MLE at DPG Media


Company: DPG Media
Years in the game: 6
Years specifically working on ML: 5
Direct reports: 0
Github | Twitter | LinkedIn


What was your path into Machine Learning?

Full disclosure: I originally dropped out of AI. In 2011, I was halfway through an Artificial Intelligence bachelor’s degree but felt like I could never “just sit down”, open my laptop, and code the cool AI things I was learning about. I hated it and became disillusioned with the field. On a whim, I dropped out and started a Software Engineering BSc. In the Netherlands that meant almost starting from scratch because none of the credits transferred.

In Dutch we have a saying that roughly translates to: you can’t hide what you love doing; eventually, your passion will resurface. It took a couple of years, but I eventually got back into AI/ML.

More to the point, in 2014 I was an undergrad doing an internship as a software engineer at a large academic hospital. A couple of the projects I did revolved around combining text databases so there would not be as many databases to update simultaneously. That got me excited about Natural Language Processing. Additionally, we had data coming in for things like ultrasounds and results from blood tests. The hospital was simply processing that data for care and not doing much aside from that. In fact, most of the data we processed went to a printer and was given to the hospital staff completely outside of any computer system. It might have been relatively secure, but I felt like there was a lot of potential going to waste. One of the business analytics interns mentioned that I should look at going for a career in Data Science.

I got another internship at a research institute and after graduating, landed a job with them as what we’d probably call a data engineer. I felt like I couldn’t compete in this research environment just on my Software Engineering roots alone. So I went back to university for a 2-year data science master’s. That worked out pretty well, although initially, it was a humbling experience. I remember installing CUDA drivers for an entire day, then debugging Chainer and TensorFlow code late at night, struggling to understand what exactly tensors were. Holy shit.

After my MSc I started at a consulting company as a machine learning engineer and data scientist, which exposed me to a lot of different organizations. I got familiar with the Google Cloud Platform, and it wasn’t a bad place or time to be an ML Engineer. After a while, I moved on to where I am now, DPG Media. I am currently a machine learning engineer on a data platform team that supports a number of companies (brands) within the larger media company.

What interests you about your current position?

I like how diverse it is. We have a bunch of different brands that all have their own unique data problems and propositions; it’s almost like being a consultant inside a company. The tech stack we use is also a great plus, and I have a lot of responsibility and flexibility.

What are some things that drive you crazy about your position?

Being a central team, we often publish our models as an API the brands can talk to. There is often a considerable gap between us deploying a production endpoint and the business actually using it. This can be months during which we essentially hear nothing and get no feedback on our models, but still have to maintain an entire ML pipeline.
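To make concrete what “publishing a model as an API” can look like from the consuming side, here is a minimal Python sketch of a brand-side client calling a SageMaker endpoint via boto3. The endpoint name and payload shape are hypothetical placeholders, not DPG’s actual interface.

```python
import json

import boto3

# Hypothetical sketch: how a brand team might call one of the central team's
# deployed models. The endpoint name and payload are illustrative only.
runtime = boto3.client("sagemaker-runtime")

def classify_job_ad(text: str) -> dict:
    """Send a job ad to a (hypothetical) text-classification endpoint."""
    response = runtime.invoke_endpoint(
        EndpointName="job-ad-classifier",      # placeholder endpoint name
        ContentType="application/json",
        Body=json.dumps({"text": text}),
    )
    return json.loads(response["Body"].read())

if __name__ == "__main__":
    print(classify_job_ad("Senior Data Engineer, Amsterdam, hybrid"))
```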

Another issue is that we have a ton of stakeholders to manage, because every brand has a number of teams focusing on parts of their product. Even if we just talk to the leads and POs, that’s still about 5 or 6 people per brand on average, and we currently support 13 brands.

What does your company do?

DPG Media is a large media company that operates in Belgium, the Netherlands, and Denmark. We do conventional media things like newspapers, TV and radio channels, and magazines, but also online communities and marketplaces. In total, we have around 90 brands and employ around 6,000 people.

As far as media companies go I think we’re fairly neutral, maybe a little bit left-leaning. Most newspapers tend to focus on quality journalism and investigative journalism. Our team does very little with these though, as we only focus on the online services of the portfolio.

What is your team responsible for?

My team specifically works on our online brands, which include two job boards (like local versions of Indeed / LinkedIn), a number of marketplaces for used cars, several websites focused on consumer electronics, online communities, and a portal where users can compare and purchase things like insurance. Most of them are top 50/100 websites in the Netherlands. A lot of these brands would fall into the target group for the “MLOps at a reasonable scale” blog series. A good model for them could potentially bring them a significant revenue boost, but it won’t be billions. In essence, we’re a data platform team.

We’re a mixed team consisting of four data engineers, two data scientists, two web data specialists, and finally me as the sole ML Engineer. Everyone is at a mid to senior+ level. We handle almost all data and ML issues for the thirteen brands. In practice, the lines are fairly blurred. The Data Scientists were specifically recruited for their coding skills. Sometimes I wonder if we truly have our own separate lanes.

What are some use cases you have with ML?

We do a lot of projects for our job platform websites: things like classifying a job ad (is this a posting for a data scientist, or for a data engineer?), extracting information from it (Is this job remote? What skills are required?), and recommendations (based on your behavior, we think you’ll like this job!). Similarly, we also want to parse a resume automatically whenever you upload it to a job platform, so you don’t have to manually enter the same information on the next page; I don’t think any of us would bother with a system that made us type all of that in by hand.
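As a rough illustration of the job-ad classification use case, here is a minimal sketch using a generic Hugging Face zero-shot pipeline. The model and candidate labels are stand-ins chosen for the example; DPG’s production models are custom TensorFlow / Hugging Face models, as mentioned later in the interview.

```python
from transformers import pipeline

# Illustrative stand-in for the job-ad classification described above;
# the model and labels are example choices, not DPG's production setup.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

ad = "We are looking for a Data Engineer to build pipelines with Airflow and dbt."
labels = ["data engineer", "data scientist", "machine learning engineer"]

result = classifier(ad, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))
```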

More classical use cases for ML that you’d expect from an online company include forecasting and audience targeting.

Lastly, we’ve recently also been doing a lot with content classification. Our websites and communities produce a lot of content (e.g. news articles about a new TV model) and we need to connect these with the right advertisements, because we don’t want to track our users to determine which ads to serve them. In the past, there was significant pushback on tracking from users of one of our tech communities. We’ve turned it around and now base the ads on the content of the page. That way we still serve relevant ads, but without tracking.

A lot of these platforms more or less make their money through advertisements. For the job platforms specifically, there’s significant value in connecting a job seeker with a job. If the platform is lousy, users won’t apply through it, and if there are no applications through the platform we don’t deliver on our promise to the people posting jobs. The match is important for both sides. I’d say ML is critical to stay competitive in this day and age.

What projects are you working on in the next 6 months?

We have a pilot where we want to do something with Graph Neural Networks and the skills that we can extract from job postings and resumes. I have no idea yet what that is going to look like once deployed. Other than that, we want to improve the models we currently have in production. We had two data science interns who did well, and we’d like to continue their work.

At the same time, I want to reduce the technical debt that we created when we started the team and simply didn’t have a clear idea of what we were going for. I want to formalize the infrastructure more and increase our monitoring and alerting coverage. Right now things are coming in mainly via Slack, but I want to reduce the number of nonsensical alerts and then add OpsGenie to the mix.
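As a sketch of the kind of alert hygiene described above, here is a minimal example that only forwards alerts above a severity threshold to a Slack incoming webhook. The webhook URL, severity field, and thresholds are assumptions for illustration, not DPG’s actual alerting setup.

```python
import json
import urllib.request

# Placeholder webhook URL and severity scheme; illustrative only.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"
SEVERITIES = {"info": 0, "warning": 1, "critical": 2}

def notify(alert: dict, min_severity: str = "warning") -> None:
    """Forward an alert to Slack only if it meets the severity threshold."""
    if SEVERITIES.get(alert.get("severity", "info"), 0) < SEVERITIES[min_severity]:
        return  # drop low-severity noise instead of spamming the channel
    payload = {"text": f"[{alert['severity'].upper()}] {alert['message']}"}
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

notify({"severity": "critical", "message": "Model endpoint error rate above 5%"})
```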

What tech do you touch on a daily basis? For what?

I do a lot with Terraform, and it has been a massive quality-of-life upgrade for deploying things on AWS while keeping the same syntax for other platforms.

Python and AWS are my bread and butter. We use a lot of Lambda functions, ECS, SageMaker, S3, and DynamoDB. As for non-AWS tech, we also use Snowflake, Docker, dbt, Airflow, MLflow, Prometheus, and Grafana. We have some legacy projects running on Databricks.

The actual machine learning relies a lot on TensorFlow and Hugging Face. We run our training jobs on SageMaker, but kick them off with Airflow.
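To sketch what kicking off a SageMaker training job from Airflow can look like, here is a minimal DAG using the Amazon provider’s SageMakerTrainingOperator. The image URI, S3 paths, role ARN, and schedule are placeholders, not DPG’s actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.sagemaker import SageMakerTrainingOperator

# All names, paths, and ARNs below are placeholders for illustration.
training_config = {
    "TrainingJobName": "job-ad-classifier-{{ ds_nodash }}",
    "AlgorithmSpecification": {
        "TrainingImage": "<account>.dkr.ecr.eu-west-1.amazonaws.com/train:latest",
        "TrainingInputMode": "File",
    },
    "RoleArn": "arn:aws:iam::<account>:role/sagemaker-execution-role",
    "InputDataConfig": [
        {
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://example-bucket/training-data/",
                }
            },
        }
    ],
    "OutputDataConfig": {"S3OutputPath": "s3://example-bucket/model-artifacts/"},
    "ResourceConfig": {
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 30,
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
}

with DAG(
    dag_id="train_job_ad_classifier",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@weekly",
    catchup=False,
) as dag:
    SageMakerTrainingOperator(
        task_id="sagemaker_training",
        config=training_config,
        wait_for_completion=True,
    )
```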

Remember that our team actually only touches a dozen out of the 90 or so brands. Beyond our team, there are others that mainly use Seldon Core or Databricks. Most other data/ML teams are focused on a particular product or domain and tailor their tools to their use case. DPG is really big in recommender systems, especially for news and video content.

On a more architectural level, DPG has adopted a data mesh approach to data architecture. For Data Engineering, this more or less translates to each of the brands being responsible for their own data sources and products. For almost all brands it’s doable to have their own data/software engineers own these (they’re big enough), but Data Scientists and machine learning engineers remain rare across the organization. As a result, we maintain most of the ML products for our brands.

What are your primary responsibilities as an MLE?

I architected the MLOps platform we have and am responsible for its continued development. I make sure the models we deploy can handle the workload, are accessible, and don’t cost too much (looking at you, SageMaker). I also develop models, especially when there’s NLP involved, but I mainly support the data scientists.

I keep up to date with the field and try to spot new opportunities within the brands, but not as much as a data scientist would do. For the past half year, I also supervised an intern working on a research project in job taxonomies.

What do your days consist of?

It can vary wildly: some days I’m basically in meetings all day, others I have essentially the entire day to dedicate to bug tickets. As a rough estimate, about 40 to 50% of my time goes to meetings. Sometimes these are architectural discussions, sometimes they’re about data science projects we want to pick up, and others are meetings with teams at the brands to see what they’re working on and whether we can do something together. We do a bit of knowledge sharing where we take turns presenting papers or past projects, and I run an AWS certification study group.

We have daily standups early in the day and run two-week sprints. Usually, I start work about half an hour before the standup so I’m already caffeinated by the time I have to talk to people. I am not a morning person.

I also supervise an intern, which takes about 5% of my time, say one or two hours per week.

There are bugs that happen, naturally, but because we’re a relatively new team we don’t have a lot of legacy code around. I can dedicate most of my remaining time to the development of new features.

What was the last bug you smashed?

One of our language models basically exists to provide embeddings (vectors) for words in the HR domain. We opted for a domain-specific model because the language used in job postings doesn’t really match how other public models were trained (e.g. models trained on Wikipedia or Common Crawl data). It’s a model we never intended to retrain unless absolutely necessary. Think of it as a super fancy word2vec model. It has multiple downstream applications that consume the vectors.

The model was trained on over 13 million job postings and covers about 1 million words in Dutch and English. You’d think that would be enough, but we were receiving a disproportionate number of errors when the model was asked for the vector of a word it had not been trained on and none of the fallback methods worked. Further investigation uncovered a number of job titles that were either misspelled, really old names for current jobs, or entirely new titles. We’re talking about jobs that appeared once over the past ten years.

The problem with new words is that we don’t actually have any data to retrain the model with until the job pops up often enough. For now, I corrected the misspellings by checking which known word was closest in terms of Levenshtein distance. There is already some logic for doing this in the deployed model, but it is a slow process. For the remainder, I tried to match the “funny” new/old words to synonyms, or to a word or phrase that I felt covered the word well enough, and simply added their vectors to the model. It might be a low-tech option, but it was a pragmatic solution, and better than retraining the entire model.

I think it shows that language is simply always evolving. We’re pondering a more robust (and automated) approach but I felt it was a cool bug to share.
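For illustration, here is a rough sketch of that kind of Levenshtein fallback: an out-of-vocabulary word is mapped to the vector of its closest known spelling. The vocabulary and vectors are toy stand-ins, not the actual model.

```python
import numpy as np

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,                # deletion
                current[j - 1] + 1,             # insertion
                previous[j - 1] + (ca != cb),   # substitution
            ))
        previous = current
    return previous[-1]

def vector_with_fallback(word: str, vocab: dict) -> np.ndarray:
    """Return the word's vector, or that of its nearest known spelling."""
    if word in vocab:
        return vocab[word]
    closest = min(vocab, key=lambda known: levenshtein(word, known))
    return vocab[closest]

# Toy vocabulary standing in for the real embedding model.
vocab = {"verpleegkundige": np.random.rand(4), "data engineer": np.random.rand(4)}
print(vector_with_fallback("verpleegkundigge", vocab))  # misspelling falls back
```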

What are you most proud of in your current position?

I don’t have anything I’m particularly proud of. I’m happy with where we are with the platform and that it’s fit for purpose, but we can always improve it.

What did you expect the job to be like vs reality?

When I interviewed for this job, I thought I could focus primarily on ML Engineering. In reality, I spent my first couple of months building data science proofs of concept to demonstrate the value of AI and ML to the business.

I also thought that there would be a lot already set up and that the knowns would by far outnumber the unknowns. In truth, it felt like no one in the organization had a good idea about MLOps, and even all of the other teams were constantly evaluating and adding new tools to their stack. I interviewed a couple of key engineers and architects, and the only real advice was to not make the same mistakes they had. Even in an organization that had been doing ML for years, no one really had the answers I was looking for. We ended up building an MLOps platform that’s a hybrid between managed and “made from scratch” options, aimed at minimizing the overhead we would have as a central team while still being flexible enough for all of our brands.

What are some things you enjoy most about your current position?

The variety. No week has so far been the same.

What kind of metrics do you follow closely?

On the lowest level, I religiously keep track of how many models are deployed with the platform and how many invocations they receive. On a higher level, we track the usual things like click-through rates (CTR). For the job sites, our most important metric is perhaps how many applications a job posting submitted to our portal actually receives. Ultimately, though, this data sits with the brands. Being a central team, we can’t own these parts of the chain, but we try to be involved. Sometimes we advise on what metrics they should track.
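As a small sketch of the lowest-level tracking mentioned here, invocation counts per SageMaker endpoint can be pulled from CloudWatch; the endpoint name below is a placeholder, not one of DPG’s.

```python
from datetime import datetime, timedelta

import boto3

# Pull daily invocation counts for one (placeholder) SageMaker endpoint.
cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/SageMaker",
    MetricName="Invocations",
    Dimensions=[
        {"Name": "EndpointName", "Value": "job-ad-classifier"},  # placeholder
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=86400,          # one datapoint per day
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(), int(point["Sum"]))
```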

What do you wish you knew before getting into Machine Learning?

When I went back to university it was a couple of years after the infamous “Data Scientist: The Sexiest Job of the 21st Century” article. Universities were catching on to the hype, and so was their recruitment. I thought I’d save the world with some kind of cool model. In reality, ML is really just one small piece of the puzzle in a much bigger system. No one told me I’d have to write unit tests!

Any random stories from the job?

Since February we’ve had two interns on our team. According to one of them, our lunch breaks are sacred times. I think about that quote a lot. Is this a good thing? A bad thing? Do we take too long or do we need better lunches?

Who do you admire?

I look up to people like Kenneth Reitz, Chip Huyen, and Vincent Warmerdam – people who have shaped the world for the better through sharing knowledge and/or open source software. I admire the energy they put into their work and the output they manage to maintain. Also, they all seem to have a good sense of humor.

Where do you want to take your career next and why?

In truth, I’m not quite sure. I like the variety my current job brings, but we’re always working on multiple projects for multiple stakeholders at once. It always seems to limit the depth we can achieve. A while ago one of the projects we pitched and made a proof of concept for turned into a new start-up company for DPG Media. It does sort of hurt when something I could only spend a couple of weeks on is taking flight without me, but it’s been fun to see it take off, and it’s in good hands now.

Maybe I’d eventually like to start my own company for this before retiring to a lighthouse along the North Sea coast.

What advice do you have for someone starting now?

Learn Infrastructure as Code. It’s great to have a tool like Terraform available to you as an ML Engineer, and employers do recognize that.

I feel that we put too much emphasis on knowing the deep inner workings of machine learning. Often you need to prove something works first, with limited time and (computing) resources. Knowing how to achieve a “good enough” result that shows the potential is an invaluable skill. There are relatively few jobs inventing new algorithms.

For ML Engineers or Data Scientists just starting out and looking for a project to go on their resume, I’d focus on a project that shows some end-to-end solution. It doesn’t have to be great, it doesn’t have to be brilliant, but showing that you know how to achieve a solution is a great way to demonstrate your skills and something to talk about in interviews.


If you would like to take part in this series and let others know what your job consists of, please reach out! Thank you to Jeffrey for sharing his story; give him a follow on LinkedIn, GitHub, and Twitter for more!
