Inside Realtor.com’s AI-Powered Image Tagging Service

An interview with Software Engineer Sara Marlowe about how the team built the image tagging AI tool for real estate platforms.

Written by Taylor Rose
Published on Mar. 25, 2025
Photo: Shutterstock

What’s the difference between a “family room” and a “living room?”

What about a “den” versus an “office?” 

Identifying the nuances in home features is one of the challenges real estate agents face when they list a property online. When an agent posts photos of a listing, they often use tags to note the property’s features: swimming pool, deck, backyard and so on. But the names of those features often vary by region, which can create a mismatch between what someone is searching for and what the agent tagged. And when there are thousands of listings, the data can become a muddled mess.

So when Sara Marlowe, a software engineer at Realtor.com, built an on-demand image tagging service, she made a big difference for real estate agents, sellers, buyers and other engineers.

Built In spoke with Marlowe about what went into the technical build for the AI-driven application. 

 

Sara Marlowe
Software Engineer • Realtor.com

Realtor.com is an open real estate marketplace built for everyone.

 

What project are you most excited to work on in 2025? What is particularly compelling about this work for you?

I'm excited to deliver Realtor.com's first on-demand image tagging service for MLS partners in 2025. This groundbreaking initiative features our first GPU-powered real-time API, leveraging OpenAI's CLIP and Google's ViT models to generate highly accurate and descriptive tags for real estate images — a significant improvement over our current system. 
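
To make the approach concrete, here is a minimal sketch of zero-shot tagging with an openly available CLIP checkpoint from Hugging Face’s transformers library. The checkpoint name, tag vocabulary and threshold below are illustrative assumptions, not details of Realtor.com’s production system.

```python
# Minimal zero-shot image tagging sketch with an open CLIP checkpoint.
# The checkpoint, tag list and threshold are illustrative assumptions,
# not Realtor.com's production configuration.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

TAGS = ["swimming pool", "deck", "backyard", "family room", "home office", "den"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def tag_image(image: Image.Image, threshold: float = 0.2) -> list[str]:
    """Return the tags whose CLIP image-text similarity clears the threshold."""
    inputs = processor(text=TAGS, images=image, return_tensors="pt", padding=True)
    with torch.inference_mode():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    return [tag for tag, p in zip(TAGS, probs.tolist()) if p >= threshold]

print(tag_image(Image.open("listing_photo.jpg")))
```

A production multi-label tagger would likely score each tag independently rather than softmaxing across one small vocabulary, but the shape of the problem is the same: compare an image embedding against text embeddings of candidate tags and keep the strong matches.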

I was deeply involved in building this from scratch, driving every phase: requirements gathering, model evaluation, Kubernetes-based infrastructure design and API implementation. Tackling challenges like GPU optimization, library compatibility and low-latency inference was both demanding and rewarding. 
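
On the API side, the sketch below shows roughly what a real-time tagging endpoint could look like, assuming FastAPI and a hypothetical /v1/image-tags route; the interview does not describe the service’s actual framework, routes or model wiring.

```python
# Hypothetical real-time tagging endpoint sketched with FastAPI.
# The route, module layout and framework choice are assumptions for illustration.
import io

from fastapi import FastAPI, File, UploadFile
from PIL import Image

from tagger import tag_image  # hypothetical module wrapping the CLIP scorer above

app = FastAPI()

@app.post("/v1/image-tags")
async def image_tags(file: UploadFile = File(...)) -> dict:
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    return {"filename": file.filename, "tags": tag_image(image)}
```

Saved as main.py next to that tagger module, this could be served locally with `uvicorn main:app` and exercised against a listing photo.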

This project modernized our tech stack, reducing maintenance overhead and setting the stage for future ML initiatives. The new tagger supports more tags with superior accuracy and performance. It's a cornerstone of Realtor.com's strategy to productionize ML models efficiently, setting a precedent for faster go-to-market timelines for real-time inference applications.

 

What does the roadmap for this project look like? 

The roadmap for this project has been structured into several phases: requirements gathering, proof-of-concept evaluations, architectural design, implementation and iterative testing. 

We're now in the final stage, focused on optimizations and production deployments. This project has truly been a cross-functional effort, requiring close collaboration with the machine learning team to fine-tune the CLIP and ViT models, the Industry Tools team to ensure seamless integration with MLS partner systems and the CI/CD platform team (Skyway) to streamline deployment processes.

Content services played a critical role in defining and validating business requirements. A big challenge was coordinating priorities and bandwidth across stakeholders. To address this, I organized regular syncs, maintained a dedicated Slack channel and created a Confluence space to ensure transparency and alignment. 

On the technical side, implementing a GPU-accelerated real-time API introduced complexities around resource allocation, latency optimization and dependency management. We tackled these through collaborative problem-solving sessions, pair programming and iterative testing.
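
As one generic illustration of what latency optimization can involve at this layer, the snippet below shows two common GPU inference tactics: mixed-precision execution and batching several images into a single forward pass. It is an example of the technique class, not the specific optimizations the team applied.

```python
# Generic low-latency GPU inference tactics: mixed precision and batching.
# Illustrative only; not the specific optimizations described in the interview.
import contextlib

import torch
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def score_batch(images, tags):
    """Score a batch of PIL images against a tag vocabulary in one forward pass."""
    inputs = processor(text=tags, images=images, return_tensors="pt", padding=True).to(device)
    # fp16 autocast cuts memory traffic and speeds up matrix multiplies on GPU.
    amp = torch.autocast("cuda", dtype=torch.float16) if device == "cuda" else contextlib.nullcontext()
    with torch.inference_mode(), amp:
        return model(**inputs).logits_per_image.softmax(dim=-1)  # shape: (batch, num_tags)
```

Batching amortizes per-request overhead such as data transfer to the GPU, one of the standard trade-offs a real-time service has to balance against per-request latency targets.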

 

What in your past projects, education or work history best prepares you to tackle this project? What do you hope to learn from this work to apply in the future? 

Over my four years at Realtor.com, I've worked on various projects within content engineering, but this initiative stands out as the most technically ambitious and cross-functional effort I've led. My background in software engineering and experience with cloud-native technologies provided a strong foundation for tackling the infrastructure and deployment challenges. My prior work on data-intensive applications helped me understand the nuances of optimizing performance and resource utilization. 

This project has been a tremendous learning experience in the realm of machine learning operations. I gained hands-on experience with model deployment, GPU optimization and real-time inference, which are critical skills for AI-driven applications. I developed a deeper understanding of the trade-offs involved in infrastructure decisions, such as balancing cost, performance and scalability. I'm excited to apply these learnings to future projects, especially those involving real-time machine learning and AI. This project has expanded my technical expertise and honed my ability to manage complex, cross-team initiatives, skills that will benefit me in driving innovation at Realtor.com.

 

Responses have been edited for length and clarity. Images provided by Shutterstock and listed companies.