Part 2 of a conversation with Trimble’s Director of AI, Karoliina Torttila.
We met with Karoliina at Trimble Dimensions 2023. This is Part 2 of our conversation. Part 1 is here.
With all the pressure to implement AI, your to-do list must be a mile long. I’m sure everyone, from your CEO to product managers, is making requests.
We built a team around solving customer problems specific to construction. I’m cherishing the environment we have created. We’re all here to solve problems in the industry we care about. There’s a lot of internal efficiency work as well. AI is not something for one team to handle; it’s something for all of us. Let’s all get our hands dirty. After the last magnificent year, there’s an accessibility aspect of AI that should impact all of us in our professional lives, and in our personal lives with our home renovation projects, for example. The entirety of Trimble is behind the effort. My team specifically focuses on what has the highest impact on customers’ construction problems. A lot of companies are working on marketing optimization, text summarization, legal work, supply chains and whatnot. There are not many companies in the world that are trying to solve engineering problems or contractor problems. We have a responsibility to pay attention to them, our customers.
What about AI’s role in new design vs. retrofit?
Several of our customers are shifting from new construction into retrofit. In the UK, 80 percent of the buildings expected to be occupied in 2050 have already been built. Bridges and roads? Also already built. The AI story, the design story, extends into our geospatial portfolio. We might think design starts with SketchUp, say, in creating a building, but that’s not the reality for a lot of our customers. On the physical-reality side, Trimble has done well with sensors and scanners. They are amazing. But we haven’t done as good a job of dealing with the data, the richness of data, that these devices produce.
Who wants to deal with point clouds? Nobody.
With roads and highways, can we enable automated asset inventory creation that the DOTs and engineering companies are after?
One of the exciting projects is a Department of Energy-funded effort, led by the National Renewable Energy Laboratory, for large-scale energy-saving retrofits of residential homes in the U.S. A lot of multifamily residential homes are not terribly energy efficient. The biggest efficiency gain comes from adding wall insulation. Can we tackle that with external cladding? The current process is cutting and bending the panels, cutting out the windows and doors and so forth. It’s not very efficient, and it’s very labor intensive at a time when we have a labor shortage. Here is a beautiful example of an end-to-end workflow: we take our scanners, scan these buildings, then use AI to extract the facade components. We extract the windows, the vents, the gutters, and produce the dimensions of the facade to send to the panel manufacturers for offsite manufacturing. Then someone comes on-site to install the panels. It’s really nice to see public money going into this, and it’s really nice for us to be part of it and make a difference.
What’s your target? Do you want to recognize objects in the point cloud? That is a need. Also, there is a need to recognize objects from a camera view.
The concept is there. We’re further developing this. Looking at each of the points and asking, “Is this a window point? Is this a facade point?” That technology has really only matured over the last couple of years, which is why we haven’t seen it pushed out by our competitors. It’s so new.
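The “Is this a window point?” question is per-point classification of the scan (semantic segmentation). As a purely illustrative sketch, not Trimble’s actual pipeline: assuming a facade plane has already been fitted and each point’s residual depth behind that plane has been computed, a crude rule-based labeler could look like this (real systems learn the labeling from data rather than using a hand-set threshold):

```python
# Illustrative sketch of per-point facade labeling. Real pipelines use
# learned 3D segmentation models; the 0.15 m recess threshold and the
# "depth behind a fitted facade plane" feature are assumptions for this demo.

def label_points(depths_m, recess_threshold=0.15):
    """Label each scan point by how far it sits behind the facade plane."""
    return ["window" if d > recess_threshold else "facade" for d in depths_m]

# Residual depths (meters) of five scan points behind the fitted plane.
depths = [0.01, 0.02, 0.30, 0.28, 0.00]
print(label_points(depths))  # → ['facade', 'facade', 'window', 'window', 'facade']
```

The point of the sketch is the shape of the problem: every point in the cloud gets its own class label, which is what makes the technique so data-hungry and so recent at scale.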
Whatever is generating the data, whether it’s point clouds or a camera … the point cloud is the worst. A LiDAR scan is very high resolution. You might have thousands of points, and the software has to recognize them as a part or a feature.
A little bit selfishly from a Trimble perspective: We love the scanners that generate both point clouds and images. You get the dimensionality from the point clouds. Then if you have 2D images, the AI has a lot to work with. Then we can say, “This looks like this” or “This looks like that” and “And by the way, here’s a window from this manufacturer with double glazing.”
Interesting. What allows AI to make the association of points with products? Is it a roomful of people who are tagging pictures with text?
No. And here is the most beautiful part. You are talking about annotation and labeling. That’s called supervised learning. But the image similarity that you saw in the keynote and that we’ve pushed out in the 3D Warehouse, that’s unsupervised learning. With unsupervised learning, you just need the datasets. You don’t need to say what everything is. That’s the beauty of how AI has evolved over the last couple of years. It is moving from having to be so specific. AI is now able to, all by itself, recognize what is similar. There’s no labeling; there’s no annotation. That’s unsupervised learning.
The reason ChatGPT or image generation services like Midjourney or DALL-E work is that they scour the Internet for images and captions and throw all of it into this vector space. Without getting too technical, they basically throw these pairs into a matrix. In that matrix, the pictures of dogs and the text “dog” sit very close to each other. When another picture of a dog comes in, it gravitates to that cluster of dog pictures. Then there’s a cat, and the cat is a little bit further away and makes its own cluster in the matrix. And then there’s a picture of a chair, and it is much further away in this matrix because it looks very different. Unsupervised learning is having these pairs in a big, beautiful matrix space. That’s why we don’t have to do the annotation. With unsupervised learning, AI develops those relationships between those things by itself. This is really exciting!
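The clustering described above can be sketched with toy embedding vectors and cosine similarity. The three-number vectors below are invented for illustration; in a real system (CLIP-style models, for example) the embeddings are learned from enormous sets of image-caption pairs:

```python
import math

# Toy version of the "big matrix": each image or caption is a vector, and
# similar things sit close together. These embeddings are made up for
# illustration; real models learn them from image-caption pairs.

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

embeddings = {
    "photo of a dog (A)": [0.9, 0.1, 0.0],
    "photo of a dog (B)": [0.8, 0.2, 0.1],
    "photo of a cat":     [0.3, 0.9, 0.0],
    "photo of a chair":   [0.0, 0.1, 0.9],
}

query = [0.85, 0.15, 0.05]  # embedding of a new, unlabeled dog photo
ranked = sorted(embeddings, key=lambda k: cosine(query, embeddings[k]),
                reverse=True)
print(ranked[0])  # the new photo "gravitates" to the dog cluster
```

No one ever labels the new photo “dog”; its vector simply lands nearest the other dog vectors, which is the similarity search behind features like the 3D Warehouse image search mentioned above.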
AI has changed quite a bit in just a couple of years. I’m glad we are no longer tagging cats and dogs. One final question. Do you feel like you’ve been given enough resources for this huge job of implementing AI? How big is your team?
It’s a little bit of a complicated answer. Of course, we’re trying to solve big industrial problems. And so, any team will be too small a team. That said, we can’t have teams of 2,000 people either, right? I don’t think technology that’s developing this fast can necessarily be developed in massive, hierarchical companies. I believe in a hub-and-spoke strategy. We have all parts of Trimble developing their AI expertise. We have brilliant counterparts within the SketchUp team, the Tekla team, the Quadri team… Where we see the commonalities, we go in and say, “Let’s use AI here.”
I don’t want a big team. It’s a massive challenge. That’s just a Trimble thing. We work with partners, our friends. This is more of a partner play.