A conversation with Stephen Hooper
In this article, we continue the discussion of AI in design and manufacturing software with Stephen Hooper, VP of software development for Autodesk’s Design and Manufacturing division. Part two can be found here.
Engineering.com: I’ve anticipated levels of automation for various design software AIs, like the SAE levels that classify autonomous-driving capability, with Level 5 indicating full autonomy. Design software with that capability would take a prompt such as “Hey AI, design a car” and would design and build the car. Level 0 is where we are now: we design and build everything. The geometry is a little smart but mostly dumb. In between, levels abound. The first level might be what Mike Haley of Autodesk talked about: a natural language UI. That might be the low-hanging fruit. It would eliminate the dependence on traditional icon-based, menu-driven systems.
Hooper: Some vendors claim this, and some startups have tried it. You’ll see a lot of new startups where this text-based input leads to maybe a skateboard. It’s a little naive to believe that we could do much more than that, for a couple of reasons.

Let’s use 2D graphics as an example. Suppose I write a prompt to create an image of a dimly lit nighttime street scene in San Francisco. It’s a back street with neon lights, and there’s a car parked at the curb. AI: create that image for me. It will accurately create that image for you. The trouble is that a large language model can get the same prompt three times and yield three different results. With a specific idea in mind, you’re going to have to expand the prompt. You’re going to have to say, “I want a green neon sign, and I want the green neon sign to say Al’s Bar, and I want Al’s Bar to be six feet off the floor on the right-hand side of the image. And the car should be a Chevy pickup truck. And make it red.” The problem is that for a precise output, the prompt will be so big and take so long to define that one may as well create the image manually.

This is true with parametrics, too. If I say, “Draw me a flat plate that is 200 by 400 mm with six equally spaced holes in the middle, drilled 6 mm in diameter all the way through,” it’s almost faster for me to draw a rectangle, put the holes in and dimension it. I think a pure text-based product that delivers a whole product definition is highly unlikely.

I expect we will move toward what we would call a multimodal prompt, by which one might provide an equation for the performance characteristics of the product. An engineer might provide some hand sketches, a little bit of a text description, and a spreadsheet that includes some of the standard parts to be used. I would call that a multimodal prompt package. You’d give it to an AI that’s able to accept multimodal input. From that it would derive a range of options with which to interact, edit and refine procedurally to get to the target output. There might be some things one can produce purely from a prompt, such as an M5 screw with a pitch of 1.5. But getting to a full product definition is going to be much harder.
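To make the “multimodal prompt package” idea concrete, here is a minimal sketch of how such a bundle of inputs might be structured as data. Every name below is a hypothetical illustration for this article; no Autodesk product exposes this structure.

```python
from dataclasses import dataclass, field

@dataclass
class PromptPackage:
    """Hypothetical bundle of multimodal inputs for a design AI.

    Field names are illustrative only, not a real product API.
    """
    text_brief: str                       # a little bit of text description
    performance_targets: list[str]        # equations for performance characteristics
    sketches: list[bytes] = field(default_factory=list)       # scanned hand sketches
    standard_parts: dict[str, str] = field(default_factory=dict)  # part -> spec

# The engineer's intent expressed as structured inputs rather than
# one enormous text prompt.
package = PromptPackage(
    text_brief="Lightweight bracket, anodized aluminum",
    performance_targets=["max_deflection <= 0.5 mm at 200 N load"],
    standard_parts={"M5x0.8": "ISO 4762 socket head cap screw"},
)
print(package.text_brief)
```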
Engineering.com: There may be certain things that I’m used to doing, certain shapes that I’m used to using, or certain components. What if the AI could anticipate them? Say I’m a bike designer in the habit of using round tubes. Could AI sense from the line I am drawing that it will be a tube and start drawing a tube? Can it use the shapes I am familiar with? That’s what I’d call Design Assist rather than fully automatic design.
Hooper: I think at the moment people’s mental model of this is static and asynchronous. For it to be truly useful, it will have to be interactive and synchronous. With the bicycle example, you may draw a layout sketch, and the system comes back with 16 options. You could say, “I like that option. It’s not exactly right, so I’m going to tweak it a little bit.” Then it’s going to come back and say, “Okay, based on how you’ve tweaked it, I’m going to optimize it so you can make it with carbon fiber in a mold.”
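The synchronous loop Hooper describes can be caricatured in a few lines of code. Every function below is a stand-in of our own invention, assuming a propose-select-tweak-reoptimize cycle; nothing here reflects an actual Fusion capability.

```python
import random

# Hypothetical stand-ins for the interactive, synchronous loop described
# above. None of these functions correspond to a real Autodesk API.

def generate_options(sketch: str, count: int = 16) -> list[str]:
    # Stand-in: the AI derives candidate frames from the layout sketch.
    return [f"{sketch}-option-{i}" for i in range(count)]

def optimize_for_process(option: str, process: str) -> str:
    # Stand-in: the AI re-optimizes the tweaked design for a manufacturing
    # process, e.g. molding in carbon fiber.
    return f"{option} (optimized for {process})"

def design_assist_session(layout_sketch: str) -> str:
    options = generate_options(layout_sketch)   # system proposes 16 options
    chosen = random.choice(options)             # user: "I like that option"
    chosen += " (tweaked)"                      # user tweaks it a little bit
    # The system answers each tweak synchronously with a new optimization.
    return optimize_for_process(chosen, "carbon_fiber_mold")

print(design_assist_session("bicycle-layout-sketch"))
```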
Engineering.com: That’s been my frustration with what’s been provided so far. We’re engineers and you gave us generative design. Generative design starts from scratch and gives us, excuse the term, garbage geometry. An experienced bike designer would want to start with tubular construction. A structural engineer may want to start modeling with I-beams. Not blobs. We’re not going to use that.
Hooper: There’ll be some elements that are deterministic and other elements that can be generated. The cross sections for a steel structure are going to be 100% deterministic. It could be a 50 by 50 by 2.5 box section or a W150 I-beam. Those will be deterministic. Then, again, we’ll have that multimodal input. You might say to the system, here are the different types of steel members that I want to use. Then you might give it a rough line sketch to say I want a structure that is three meters high in this kind of format. It will take the sketch and the list of standard content that you want to use and produce the structure for you.
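One concrete reading of that split is a fixed, deterministic catalog of sections feeding a generative arrangement step. Everything below, from the catalog keys to the function signature, is a hypothetical sketch under that assumption, not a real API.

```python
# Hypothetical sketch: the section catalog is deterministic; only the
# arrangement step is generative. None of these names are a real API.

STEEL_CATALOG = {
    "box_50x50x2.5": {"family": "box section", "size_mm": (50, 50, 2.5)},
    "W150": {"family": "I-beam", "designation": "W150"},
}

def generate_structure(rough_sketch: str, member_keys: list[str], height_m: float) -> dict:
    # Stand-in for the generative step: a rough line sketch plus a list of
    # allowed standard members goes in; a structure built only from catalog
    # sections comes out.
    members = {key: STEEL_CATALOG[key] for key in member_keys}
    return {"sketch": rough_sketch, "height_m": height_m, "members": members}

structure = generate_structure("rough-line-sketch", ["box_50x50x2.5", "W150"], height_m=3.0)
print(structure)
```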
Engineering.com: That is what I would call Design Assist. It’s going to use shapes and parts I’m comfortable with, ones I’ve already found to be optimal or standard, and start from those. If I’m making a wall, I don’t want to have to draw the two-by-fours. If I’m creating a commercial building, I don’t want to draw the I-beams. I don’t want to use blobs. Let me use round tubes, and let AI help me figure out where the connections between the round tubes should be. What is the optimum configuration of the round tubes for maximum strength and minimum weight?
By the way, no one has taken me up on my bike challenge: designing a bike frame that is better than the standard diamond shape made with tubes. Excuse my impatience, Stephen. I know you guys are trying hard. You’re putting a lot of stuff into the CAD software. This is me saying, after one part of the house is redesigned, “It looks great, but what about the rest of it? Why can’t we do this?” Honestly, I love that Autodesk isn’t making me annotate drawings. That’s great.
Hooper: Good point on levels. I would suggest levels that come after that. The level after design assist would be multidisciplinary. Right now, you’re looking at a 3D model while someone using Cadence is looking at a printed circuit board; there are different AIs in different domain disciplines. An AI that can work across a multidisciplinary model would be ideal. Beyond that is systems architecture. Then I can generatively produce a systems architecture for a product, and I won’t need to do the detailed design. I’m going to look at the interactions. I’m going to have a black box for the software, a black box for the transmission, another for the suspension, another for the electronics. We can build the systems architecture generatively, and at the next level down from systems architecture, generatively produce the actual details in each of the disciplines. Then I think we’ll get to a generative AI design platform.
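One way to picture that systems-architecture level is as a set of black boxes with defined interfaces and no internal detail yet. The sketch below is purely illustrative; the class and subsystem interface names are our assumptions, not anything Hooper described.

```python
from dataclasses import dataclass

# Illustrative only: a "black box" systems view in which each subsystem
# exposes interfaces but no internal detail. Names are hypothetical.
@dataclass
class BlackBox:
    name: str
    inputs: list[str]
    outputs: list[str]

# A generatively produced architecture would be a graph of such boxes;
# the detailed design inside each box comes at the next level down.
architecture = [
    BlackBox("software", inputs=["sensor_data"], outputs=["control_signals"]),
    BlackBox("transmission", inputs=["engine_torque"], outputs=["wheel_torque"]),
    BlackBox("suspension", inputs=["road_loads"], outputs=["chassis_loads"]),
    BlackBox("electronics", inputs=["power", "control_signals"], outputs=["sensor_data"]),
]

for box in architecture:
    print(f"{box.name}: {box.inputs} -> {box.outputs}")
```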
Engineering.com: Okay, but don’t give me blobs.
Hooper: I agree. Not generative design in the historical sense; rather, a generative AI platform for design.
Engineering.com: That annotation item and the CNC AI mentioned earlier sound excellent.
Hooper: At Level 1, we have a design check, and at Level 2 we eliminate the non-value-added tasks.
Engineering.com: To remove what we don’t want to deal with — because engineers hate to annotate.
Hooper: Level 3 is design assist; Level 4 is multidisciplinary; Level 5 is systems-level architecture; Level 6 is the complete product definition.
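Written out as a simple enumeration, the ladder proposed across this conversation (Hooper’s Levels 1 and 2 above plus these four) looks like the following. This is just the interview’s taxonomy rendered as code, not a published standard, and the member names are our own.

```python
from enum import IntEnum

# The levels as proposed in this conversation; not a published standard.
class DesignAILevel(IntEnum):
    DESIGN_CHECK = 1             # Level 1: design check
    NON_VALUE_TASK_REMOVAL = 2   # Level 2: eliminate non-value-added tasks (e.g., annotation)
    DESIGN_ASSIST = 3            # Level 3: design assist
    MULTIDISCIPLINARY = 4        # Level 4: AI works across domain disciplines
    SYSTEMS_ARCHITECTURE = 5     # Level 5: generative systems-level architecture
    PRODUCT_DEFINITION = 6       # Level 6: complete product definition

print(list(DesignAILevel))
```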
Engineering.com: I’ll be taking a stab at establishing those levels, and I’ll share them with you. We’ve been hearing companies say they’ve got AI, and I think, “How much?” A standard with levels would let everyone see whether they are at Level 1 or Level 2.
Hooper: We’re also being secretive, because there may be things we’re working on that we don’t want to talk about.
Engineering.com: I thought so, but you told me about Fusion 360 having automatic annotation. Is that public information?
Hooper: The annotations in Fusion will be live in the product soon. That’s public, but there may be other things that we’re working on with Mike Haley that are secret.