Artificial Intelligence - Engineering.com https://www.engineering.com/category/technology/artificial-intelligence/ Thu, 31 Oct 2024 03:57:28 +0000

Optimize Part Procurement & Generate Savings https://www.engineering.com/resources/optimize-part-procurement-generate-savings/ Mon, 14 Oct 2024 18:06:55 +0000 Turn your existing process into data-driven collaborative sourcing to decrease complexities and costs and minimize compromises between program margin and time-to-market.

The post Optimize Part Procurement & Generate Savings appeared first on Engineering.com.


Reorganizations, mergers, and innovation all tend to introduce new product parts, increasing the costs to design, manufacture, test, source, and store them.

In addition, as a result of global disruptions, manufacturers report that supplier costs are rising, deliveries are delayed, and suppliers are less reliable and predictable. This can drain revenue and profits.

Enhancing collaboration between Engineering and Procurement can help companies address these challenges.

Watch our webinar to see how NETVIBES solutions powered by data science and AI generate part procurement savings.

  • Uncover the synergies between Engineering and Sourcing with the Dassault Systèmes 3DEXPERIENCE platform.
  • Learn how to streamline your new component sourcing by leveraging all of your data with 3D and artificial intelligence technologies.

 

This on-demand webinar is sponsored by Dassault Systèmes.

How quantum computing is already changing manufacturing https://www.engineering.com/how-quantum-computing-is-already-changing-manufacturing/ Tue, 24 Sep 2024 13:23:55 +0000 https://www.engineering.com/?p=132147 The prospects for hybrid quantum optimization algorithms in the manufacturing industry are particularly promising.

A laser setup for cooling, controlling, and entangling individual molecules at Princeton University. (Image: National Science Foundation/Photo by Richard Soden, Department of Physics, Princeton University.)

Various industries are becoming increasingly aware of the potential of quantum technology and the prospects for the manufacturing industry are particularly promising. There are already quantum algorithms being used for specific manufacturing tasks. These are hybrid algorithms that combine quantum calculations and conventional computing—in particular high-performance computing. As the first benefits of quantum technology are already being realized today, it’s worthwhile for companies to familiarize themselves with the technology now.

Where quantum algorithms fit manufacturing

To find suitable use cases, it’s helpful to know about one of the most popular hybrid algorithms: the Quantum Approximate Optimization Algorithm (QAOA). QAOA is considered a variational algorithm and is used to find solutions to optimization problems. A Variational Quantum Algorithm (VQA) is based on the variational method, which involves a series of educated guesses performed by a quantum computer and refined by classical optimizers until an approximate solution is found. This iterative process combines classical computers with quantum computers, allowing companies to access the benefits of quantum computing more quickly, rather than waiting for technological breakthroughs that may not happen for several years.
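As a rough illustration of the hybrid loop just described (educated guesses refined by a classical optimizer), here is a purely classical Python sketch in which a simple cost function stands in for the quantum circuit evaluation. All names and the toy cost function are illustrative assumptions, not any vendor's API or a real QAOA implementation.

```python
import random

def quantum_cost(params):
    # Stand-in for a quantum circuit evaluation: in a real QAOA run,
    # a parameterized circuit would be executed on quantum hardware and
    # the cost estimated from measurement samples. Here we use a simple
    # classical function so the loop structure is runnable.
    return sum((p - 0.5) ** 2 for p in params)

def variational_loop(n_params=2, iterations=200, step=0.05, seed=0):
    """Classical optimizer iteratively refining parameters for a
    'quantum' cost function, mirroring the hybrid structure of a VQA."""
    rng = random.Random(seed)
    params = [rng.random() for _ in range(n_params)]
    best = quantum_cost(params)
    for _ in range(iterations):
        # Educated guess: perturb the parameters, keep the change if it helps.
        candidate = [p + rng.uniform(-step, step) for p in params]
        cost = quantum_cost(candidate)
        if cost < best:
            params, best = candidate, cost
    return params, best
```

In a genuine hybrid workflow, only `quantum_cost` would run on quantum hardware; the surrounding loop stays on a classical machine, which is why this approach is usable today.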

Hybrid quantum algorithms open creative possibilities for challenges in manufacturing. For example, it will be possible to develop new, better materials by simulating the interaction of molecules more reliably and quickly. Classical computers already struggle to simulate simple molecules correctly. Since quantum computers can explore several possible paths simultaneously, they are better able to calculate complex interactions and dependencies. This reduces the cost and time required to research and produce innovative materials – which is particularly promising for the development of better batteries for electric cars.

Quantum calculations can also make a difference in logistics and inventory management, where the “traveling salesman problem” is a recurring challenge: what is the shortest route that visits a list of locations exactly once and then returns to the starting point? When solving this type of problem, quantum computers are significantly faster than traditional systems. Even with just eight locations, a traditional computer needs more than 40,000 steps, whereas a quantum computer can solve it in about 200 steps. Those who firmly integrate such calculations into work processes will be able to save a lot of time and resources.
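To make the scale of the classical search concrete, here is a minimal brute-force sketch (not a quantum algorithm) that enumerates every tour of a small distance matrix. For eight locations the naive enumeration covers 8! = 40,320 orderings, the same "more than 40,000 steps" scale quoted above; the matrix below is an invented example.

```python
from itertools import permutations
import math

def brute_force_tsp(distances):
    """Try every ordering of the locations and return the shortest tour.

    distances[i][j] is the cost of travelling from location i to j;
    the tour starts and ends at location 0.
    """
    n = len(distances)
    best_route, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        route = (0, *perm, 0)
        cost = sum(distances[a][b] for a, b in zip(route, route[1:]))
        if cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost

# Eight locations means 8! orderings in the fully naive search
# (fixing the start city, as above, trims this to 7! = 5,040).
print(math.factorial(8))  # 40320
```

The factorial growth is the point: at 15 locations the naive search already exceeds a trillion orderings, which is why heuristic or quantum-assisted approaches become attractive.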

The situation is similar for supply chains. Maintaining one’s supply chain despite geopolitical upheavals is increasingly becoming a hurdle for the manufacturing industry. Remaining flexible is easier said than done, as changing suppliers can quickly lead to delays in the workflow. Although most manufacturers have contingency plans and replacement suppliers at the ready, the market is convoluted. Huge amounts of data must be considered to find the cost-optimal and efficient supply chain. Quantum algorithms can handle this and allow ad hoc queries of this kind, which is a decisive advantage in volatile situations.

Approaching the quantum advantage

Hybrid quantum algorithms can be used in a variety of ways. Volkswagen, for example, found a use case in the application of car paint and was able to optimize this process. It was possible to reduce the amount of paint used and speed up the application process at the same time. 

Some practices help manufacturers to enter quantum computing via hybrid quantum algorithms. Although the full quantum advantage will only unfold in the future, awareness of the technology’s potential is important today. Now is the best time to actively engage with quantum computing and identify industry-specific use cases. This makes it possible to estimate the complexity of the problems and the computing power required. This in turn makes it easier to estimate when the right hardware might be available.

Once suitable application scenarios have been found, there is no need to wait for the ideal quantum hardware. Instead, manufacturers should try their hand at a simplified program for a specific scenario and combine the latest quantum technology with conventional systems. At best, this hybrid approach can achieve a proof of concept and realize tangible improvements – Volkswagen is a good example of this.

It’s also important to note that it’s usually not necessary to learn programming for quantum computing at the machine language level. There are already higher-level programming languages that are less abstract and complex and therefore easier to learn. The market also has platforms that represent quantum-based applications via graphical user interfaces. These can help development teams show these applications to other departments and make them easier to understand. It’s advisable to focus on platforms that are cloud-based and agnostic in terms of hardware. It’s currently still unclear which hardware will prevail in quantum computing. Flexibility is therefore particularly valuable to minimize conversion costs, which can be incurred with on-premise installations.

A strategic investment

Even with the most innovative technologies, big changes don’t happen overnight. While we will see leaps towards the full quantum advantage, it will take time for the technology to become fully applicable. The bottom line is that those who have prepared themselves earlier will be able to utilize the quantum advantage sooner. The transition to quantum computing can be a challenge if not enough groundwork has been done. A smooth transition is possible if employees are trained in the use and maintenance of quantum systems.

The introduction of hybrid quantum algorithms is also strategically valuable due to potential patent applications. Only an early discovery of industry-specific quantum applications allows manufacturers to quickly fill their portfolio and legally secure this intellectual property.

Erik Garcell is head of technical marketing at Classiq Technologies, a developer of quantum software. He has a doctorate in physics from the University of Rochester and a master’s in technical entrepreneurship and management from Rochester’s Simon School of Business.

Manufacturing AI will struggle without focus https://www.engineering.com/manufacturing-ai-will-struggle-without-focus/ Wed, 11 Sep 2024 16:13:21 +0000 https://www.engineering.com/?p=131790 New study says lack of direction a key headwind in manufacturing adoption of AI.


Artificial intelligence is widely anticipated and is expected to permanently change the manufacturing landscape worldwide.

The promise is huge, but to deliver on it, manufacturers need to develop coherent implementation strategies and, more importantly, understand where the use cases for AI exist.

New research from AI software provider IFS suggests that American firms are skeptical of artificial intelligence in its current form.

* * *
Access all episodes of This Week in Engineering on engineering.com TV along with all of our other series.

To get value from AI, size matters—but not the way you think https://www.engineering.com/to-get-value-from-ai-size-matters-but-not-the-way-you-think/ Tue, 27 Aug 2024 14:23:07 +0000 https://www.engineering.com/?p=131160 Large companies can make huge gains from a solid AI implementation, but smaller firms can make an impact faster.

(Image: Allie Systems)

This scene plays out every day across American manufacturing: in a sprawling factory, the hum of machinery produces a metallic symphony that can only be created by the tools of advanced industrial progress. Yet, amid the whir of automated systems, state-of-the-art robots and small autonomous vehicles gliding across the floor to their intended destinations, a surprising scene unfolds: a worker stationed in the middle of the shop floor diligently recording real-time production data on a chalkboard.

This paradox—a high-tech factory still reliant on humanity’s oldest method of data recording—illustrates a broader issue within the manufacturing sector. Despite substantial investments in advanced software, machinery and automation, many factories lag in digitalization, missing out on the efficiencies that digital transformation and the latest AI technology are poised to deliver.

Alex Sandoval, founder and CEO of Allie Systems, a manufacturing AI developer based in Mexico, has witnessed this scenario numerous times. His company specializes in developing autonomous manufacturing AI agents and aims to redefine how enterprise-scale manufacturers interact with the latest technology. The mission is to bridge the gap between high-tech machinery and the outdated data practices still in use, even in many factories that would rightfully be considered smart manufacturing facilities.

Decoding manufacturing AI

Allie’s AI approach is to inject another dose of intelligence into that already smart manufacturing environment.

“Manufacturing involves numerous machines and computers generating vast amounts of data,” Sandoval explains. “Our role is to tap into that data, which is often underutilized, and turn it into actionable insights.”

Allie AI connects to the various machines within a production facility, gathering data on production quality, process variables and machine health. This data is then used to train an AI agent—essentially a highly specialized software system that becomes an expert in the specific operations of a factory. This AI agent can predict and identify issues, suggest improvements and even take action autonomously.
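As a toy illustration of the kind of signal such an agent might monitor, the sketch below flags sensor readings that deviate sharply from their recent history using a rolling z-score. This is a generic anomaly-detection technique chosen for illustration; it is an assumption on our part, not Allie AI's actual method.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate sharply from the recent trend.

    Each reading is compared against the mean and spread of the
    preceding window; indices with large z-scores are returned
    for review (e.g. a sudden jump in vibration or temperature).
    """
    flags = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# A stable signal followed by a spike: only the spike is flagged.
readings = [10.0, 10.1] * 5 + [25.0]
print(flag_anomalies(readings))  # [10]
```

A production system would of course use richer models and many correlated signals, but the shape of the task — turn raw machine data into a short list of items worth human attention — is the same.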

The challenge of digitalization

Despite the allure of advanced technology, the manufacturing sector faces significant hurdles. Sandoval highlights the irony of modern factories with cutting-edge robotics still relying on outdated methods of data collection. “I visited a top beverage manufacturer, and while their production line was fully automated, they had workers manually recording data,” he says. “It’s a stark reminder that automation does not automatically equal digitalization.”

This disconnect between sophisticated equipment and data management systems creates inefficiencies. The promise of AI in manufacturing lies in its ability to analyze vast amounts of data and provide real-time solutions, but many companies have yet to implement the type of cohesive digital strategy required to effectively train an AI agent.

Big versus small

Allie AI’s clients are predominantly large, enterprise-scale companies across the Americas. These are giants in industries like food and beverage, where the cost of inefficiency can be astronomical. Sandoval’s company has 35 full-time employees and an additional 40 implementation engineers who visit factories to physically install and connect the data gathering systems. This on-the-ground approach is crucial for understanding and optimizing complex manufacturing processes.

However, smaller companies also stand to benefit from AI. According to Sandoval, mid-sized firms have the advantage of agility, while potentially valuable projects can lie dormant for months at large enterprises. “While large companies can leverage more data, they often face bureaucratic delays,” he explains. “Smaller firms can move quickly, implementing solutions with fewer obstacles and adapting faster.”

Hype or reality

The promise of AI has faced its fair share of skepticism, with some arguing that it has yet to deliver significant value. Sandoval counters this view by pointing to the enormous potential for AI in manufacturing. “Globally, the manufacturing sector wastes trillions of dollars due to inefficiencies,” he notes. “AI can address these issues, but only if applied to the right problems.”

He highlights a real-world example from a cement company in Mexico, where downtime of a single oven can cost $550,000 per hour. By using AI to predict and prevent such issues, companies can achieve substantial cost savings and operational improvements.

It’s unlikely the benefits for smaller organizations will carry a similar dollar value. But the speed of implementation combined with a flatter organizational structure means the benefits, though smaller, will likely be realized much faster and with just as great an impact.

Looking ahead, Sandoval joins the chorus of voices that envision a future where AI becomes integral to manufacturing. “We’re moving towards a time where factories will be managed by AI agents that handle the bulk of operational decisions,” he says. “These agents will analyze data, suggest optimizations and even implement changes autonomously.” Sandoval also acknowledges the human element in this technological shift. The success of AI in manufacturing depends on upskilling the workforce and overcoming resistance to new technologies. “Training and adapting are crucial,” Sandoval emphasizes. “As AI evolves, so must our approach to managing and integrating it into our systems.”

Why AEC firms should embrace AI https://www.engineering.com/why-aec-firms-should-embrace-ai/ Wed, 21 Aug 2024 20:21:28 +0000 https://www.engineering.com/?p=131080 AI becoming vital for being competitive in the AEC industry


For engineering firms, artificial intelligence (AI)-driven tools and other intelligent technologies are more than just a novelty or a luxury; they’re a near-imperative to keep pace in today’s highly competitive business environment, according to a newly released benchmarking report for the architecture, engineering, and construction (AEC) industries.

Findings from the 2024 edition of the AEC Inspire Report from Unanet, the business software company for which I serve as executive vice president for AEC, underscore just how important it is for firms to integrate technologies like AI across their operations, from business development to project execution to strategic planning. “One thing is certain,” the report asserts: “tech-advanced [AEC] firms that can harness the full potential of emerging technologies are the ones best positioned to accelerate growth, overcome challenges, and navigate the unknown. Such companies are not only operating for today; they are prepared for tomorrow.”

Based on survey responses collected this past spring from more than 330 senior-level AEC executives, the report (available for free download here) provides a revealing look at the trends, best practices, strategic priorities, and other dynamics shaping these three industries. It gives engineering firms the means to measure themselves against their peers across the industry.

AEC findings

The results highlight a strong sense of optimism across the AEC industries and an increasingly clear business case for firms to embrace technologies like AI. For example:

  • Most AEC firms feel good about the current business environment. A large share — 86% — of respondents hold an optimistic business outlook, and 42% say they’re “very optimistic.”
  • A winning business climate. Most firms, 58%, report a proposal win rate of more than 50%, while a much larger share, 72%, project a win rate above 50% for the year ahead, another sign of growing optimism. Those most confident in their future are firms that leverage technology because they are more likely to have keen insights into all aspects of their company’s resources, projects, and pipelines. These firms are better positioned to weather challenges and economic unpredictability while having greater confidence in their ability to forecast their business and manage resources.
  • Despite a generally positive outlook, 39% of AEC firms are concerned about the economy. Operational efficiency and talent recruiting and retention are other issues that are particularly concerning.
  • M&A (merger and acquisition) is on the menu, especially on the buy side. Half of surveyed AEC firms say acquisitions are of interest to their company in the year ahead, while just 5% are interested sellers. Among engineering firms, 40% say they’re interested buyers.

On the technology front, “it may be tempting to stay the course, to tackle change in slow increments,” states the report, “but this approach will not serve for much longer.”

Close to half of AEC firms — 48% — qualify as “tech-advanced” because they meet at least three of the following criteria:

  • Data-driven, regularly using data for business management, decision-making, and performance assessment.
  • Cloud-dominant, with more than 50% of tools and applications based in the cloud.
  • Fully integrated, with complete integration of platforms and applications across all systems.
  • AI-mature, as active users of AI with comprehensive firm-wide policies and procedures in place to guide and govern AI usage.

More than half of AEC firms are using AI to some extent, while another one-third are open to using it but are not currently doing so. Our report reveals a strong business case for firms to implement AI:

  • Close to one-third of firms — 31% — are using AI with policies and guidelines in place as guardrails. However, 26% use AI without formal oversight policies, unnecessarily inviting legal, security, and compliance risks.
  • Architecture firms are twice as resistant to implementing AI as construction and engineering firms.
  • AI-mature firms are much more prolific project proposal producers, averaging 263 per year compared to 144 for less AI-savvy firms. They also win more projects and expect higher future win rates than less AI-savvy firms.

To deliver these kinds of benefits, AI requires firms to establish a strong foundation that includes not only internal policies to guide AI usage but also robust employee training on AI and high-quality data, underpinned by clear data stewardship policies. The report states, “Organizational data governance is foundational to AI implementation, and AI implementation is a must in today’s data-driven reality.”

Findings specific to engineering firms

Engineering firms show deep concern about the current state of their workforce. Compared to their counterparts in architecture and construction, engineering firms struggle more with recruiting and more frequently list recruiting as a top human resource challenge. Although they share the AEC industry’s overall sense of business optimism, the workforce issue is pressing enough for many to turn down work for want of labor. As the report notes, firms can attract and retain talent by offering employees access to cutting-edge technology in their day-to-day work and by partnering with local colleges and trade schools.

A lack of sophisticated forecasting practices exacerbates the talent shortfall. Engineering firms most frequently rely on Excel spreadsheets to forecast labor resources and are less likely to be able to predict their growth rate. Troublingly, one-third of engineering firms say they cannot project their growth for the coming year.

Engineering firms also appear deliberate in adopting AI and supporting AI policies. Less than one-quarter of those we surveyed said they’re using AI with policy guardrails in place. As for the areas in which they expect to realize the most benefit from using AI, data analysis and content generation top the list.

Just how important are AI and digital technologies generally to success? For engineering firms, the report concludes, “Technological transformation is essential to maintaining competitive footing and operational resilience in the face of a growing talent shortage.”

About the author

Akshay Mahajan is Executive Vice President, AEC, at Unanet, a company that creates business software solutions for architecture, engineering and construction firms, and government contractors. For more information, visit https://unanet.com/.

Part 3: How will Autodesk use AI for product design? https://www.engineering.com/part-3-how-will-autodesk-use-ai-for-product-design/ Tue, 20 Aug 2024 23:01:04 +0000 https://www.engineering.com/?p=131005 A conversation with Stephen Hooper

Stephen Hooper is vice president of software development, design and manufacturing at Autodesk.

In this article, we continue the discussion of AI in design and manufacturing software with Stephen Hooper, VP of software development for Autodesk’s Design and Manufacturing division. Part two can be found here.

Engineering.com: I’ve anticipated levels of automation for various design software AIs — like the SAE levels that classify autonomous-driving capabilities, with Level 5 indicating full autonomy. Design software at that level would take a prompt such as “Hey AI, design a car” and would design and build a car. Level 0 is where we are now: we design and build everything. The geometry is a little smart but mostly dumb. In between, levels abound. The first level might be what Mike Haley of Autodesk talked about — a natural language UI. That might be the low-hanging fruit. It would eliminate the dependence on traditional icon- and menu-based systems.

Hooper: Some vendors claim this, and some startups have tried it. You’ll see a lot of these new startups where text-based input leads to maybe a skateboard. It’s a little naive to believe that we could do much more than that, for a couple of reasons. Let’s use 2D graphics as an example. Suppose I write a prompt that creates an image of a dimly lit nighttime street scene in San Francisco. It’s a back street with neon lights, and there’s a car parked at the curb on the sidewalk. AI: create that image for me. It will accurately create that image for you. The trouble is that the large language model can get the same prompt three times and yield three different results. With a specific idea in mind, you’re going to have to start to expand the prompt. You’re going to have to say, “I want a green neon sign, and I want the green neon sign to say Al’s Bar, and I want Al’s Bar to be six feet off the floor on the right-hand side of the image. And the car should be a Chevy pickup truck. And make it red.” The problem is that for a precise output, the prompt will be so big and take so long to define that one may as well create the image manually. This is true with parametrics, too. If I say, draw me a flat plate that is 200 by 400 mil with six equally spaced holes in the middle, drilled six mil in diameter all the way through, it’s almost faster for me to draw a rectangle, put the holes in and dimension it. I think a pure text-based product that delivers a whole product definition is highly unlikely. I expect we will move towards what we would call a multimodal prompt, by which one may provide an equation for the performance characteristics of the product. An engineer might provide some hand sketches, a little bit of a text description, and a spreadsheet that includes some of the standard parts to be used. I would call that a prompt package that is multimodal.
You’d give that to an AI that’s able to accept multimodal input. From that it would derive a range of options with which to interact, edit and refine procedurally to get to the target output. There might be some things one can have produced purely from a prompt — an M5 screw with a pitch of 1.5, for example. But to get to a full product definition, it’s going to be much harder.
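One way to picture Hooper's "multimodal prompt package" is as a simple container bundling several input modalities. The sketch below is purely illustrative; the class, field names, and example values are our assumptions, not an Autodesk interface.

```python
from dataclasses import dataclass, field

@dataclass
class PromptPackage:
    """Hypothetical bundle of multimodal design inputs: free text,
    performance constraints, hand sketches, and standard parts,
    handed to a design AI together rather than as one long prompt."""
    text_description: str
    performance_equations: list = field(default_factory=list)  # constraint formulas
    sketch_files: list = field(default_factory=list)           # hand-sketch image paths
    standard_parts: list = field(default_factory=list)         # rows from a parts spreadsheet

# An invented example in the spirit of the bicycle discussion below.
package = PromptPackage(
    text_description="Bicycle frame, carbon fiber, 9 kg max",
    performance_equations=["torsional stiffness >= 90 N*m/deg"],
    standard_parts=["M5 x 0.8 screw"],
)
print(package.text_description)
```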

Engineering.com: There may be certain things that I’m used to doing, certain shapes that I’m used to using, or certain components. What if the AI could anticipate them? Say I’m a bike designer and in the habit of using round tubes. Could AI sense from the line I am drawing that it will be a tube and start drawing a tube? Can it use the shapes I am familiar with? That’s what I’d call Design Assist rather than fully automatic design.

Hooper: I think at the moment people’s mental model of this is that it’s static and asynchronous. I think for it to be truly useful, it will be interactive and synchronous. With the bicycle example, one may draw a layout sketch, and it comes back with 16 options. One could say, “I like that option. It’s not exactly right, so I’m going to tweak it a little bit.” Then it’s going to come back and say, “Okay, based on how you’ve tweaked it, I’m going to optimize it so you can make it with carbon fiber in a mold.”

Engineering.com: That’s been my frustration with what has been provided so far. We’re engineers, and you gave us generative design. Generative design is going to start from scratch and give us, excuse the term, garbage geometry. An experienced bike designer would want to start with tubular construction. A structural engineer may want to start modeling with I-beams. Not globs. We’re not going to use that.

Hooper: There’ll be some elements that are deterministic and other elements that can be created. The cross sections for a steel structure are going to be 100% deterministic. It could be a 50 by 50 by 2.5 box section or a W-150 I-beam. Those will be deterministic. Then, again, we’ll have that multimodal input. You might say to the system, here are the different types of steel members that I want to use. Then you might give it a rough line sketch to say I want a structure that is three meters high in this kind of format. It will take the sketch and the list of standard content that you want to use and produce the structure for you.

Engineering.com: That is what I would call Design Assist. It’s going to use shapes and parts I’m comfortable with, ones I’ve already found to be optimum or standard, and start using those things. If I’m making a wall, I don’t want to have to draw the two-by-fours. If I’m creating a commercial building, I don’t want to draw the I-beams. I don’t want to use blobs. Let me use round tubes. AI can help me figure out where the connections between the round tubes should be. What is the optimum configuration of the round tubes for maximum strength and minimal weight?

By the way, no one has taken me up on my bike challenge: designing a bike frame that is better than the standard diamond shape made with tubes. Excuse my impatience, Stephen. I know you guys are trying hard. You’re putting a lot of stuff into the CAD software. This is me saying, after one part of the house is redesigned: it looks great, but what about the rest of it? Why can’t we do this? Honestly, I love that Autodesk isn’t making me annotate drawings. That’s great.

AI levels of automation suggested by Autodesk.

Hooper: A point on levels: I would suggest levels that come after those. The next level would be multidisciplinary. Now, you’re looking at a 3D model, or someone using Cadence is looking at a printed circuit board. There are different AIs in different domain disciplines. An AI that can get into a multidisciplinary model would be ideal. Beyond that, into systems architecture. Now I can generatively produce a systems architecture for a product. Then I’m not going to need to do a detailed design. I’m going to look at the interaction. I’m going to have some black box for the software, some black box for the transmission and the suspension, another black box for the electronics. We can build the systems architecture generatively, and then at the next level we’ll be able to generatively produce the actual details in each of the disciplines. Then I think we’ll get to a generative AI design platform.

When AI goes bad. A blob-eye view of a bicycle frame. Note the chainring embedded in the blob. Image from video posted on Facebook.

Engineering.com: Okay, but don’t give me blobs.

Hooper: I agree — no generative design in the historical sense. Instead, a generative AI platform for design.

Engineering.com: That annotation item and the CNC AI mentioned earlier sound excellent.

Hooper: At Level 1, we have a design check, and at Level 2 we eliminate the non-value-added tasks.

Engineering.com: To remove what we don’t want to deal with — because engineers hate to annotate.

Hooper: Level 3 is the design assist; Level 4 is multidisciplinary; Level 5 is systems-level architecture; Level 6 is the complete product definition.

Engineering.com: I’ll be taking a stab at establishing those levels. I’ll share them with you. We’ve been hearing companies say they’ve got AI, and I think: how much? A standard with levels would let everyone see whether they are at Level 1 or 2.

Hooper: We’re also being secretive, because there may be things we’re working on that we don’t want to talk about.

Engineering.com: I thought so, but you have told me about Fusion 360 having automatic annotation. Is that public information?

Hooper: The annotations in Fusion will be live in the product soon. That’s public, but there may be other things that we’re working on with Mike Haley that are secret.

The post Part 3: How will Autodesk use AI for product design? appeared first on Engineering.com.

]]>
Forget ChatGPT, the answer is in your own data https://www.engineering.com/forget-chatgpt-the-answer-is-in-your-own-data/ Tue, 20 Aug 2024 22:48:57 +0000 https://www.engineering.com/?p=131002 Accuris connects internal systems for a "customer digital enablement."

The post Forget ChatGPT, the answer is in your own data appeared first on Engineering.com.

]]>
Image: Accuris

We’ve all been swept up in the AI wave. We have tried all manner of large language models (LLMs, including the media favorite, ChatGPT) and found them all lacking in one way or another. One, Claude, claimed to be good at math but wasn’t. Most seem to be good at answering college-level English literature questions, but incapable of solving freshman physics problems. We have used them for help when researching articles (especially Perplexity). When we gave ChatGPT several engineering-type questions, it always had an answer, but not always a correct one.

Perhaps we should not expect LLMs to be know-it-alls. How could they be? They’re trained on data troves of information scraped from all who would supply it as well as publicly available information such as Wikipedia. While the collective size of the data at their disposal is staggering, the quality, accuracy and depth of their answers is all too often lacking. Plus, the data that may be most valuable to engineers — their own — is off limits to LLMs.

Any organization of a good size and long history will have a tremendous amount of information. Therein lies a valuable history: documents, drawings, models, revisions, PLM databases, all of which comprise tribal knowledge.

Jeff Platon is vice president of marketing at Accuris. Image: LinkedIn

If this were a class on the failures of LLMs for engineers and the need for AIs to train on their own data, Jeff Platon, head of marketing at Accuris, would be in the front row, emphatically raising his hand.

“We totally got this,” he would say.

Accuris has a product called Goldfire, a semantic search application made to search an organization’s data.

Semantic search is the type of search that can sense context and the user’s meaning, as opposed to keyword search, which looks for exact word matches. By way of example, semantic search will find a different “best football player” in the U.S. than the UK, recognizing the correct sport in each location.
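The idea can be illustrated with a toy sketch. This is an illustration of semantic search in general, not Goldfire's implementation: documents and queries are mapped to vectors, and ranking is done by vector similarity rather than keyword overlap. The embedding values below are invented for the example.

```python
import math

# Toy "embeddings": in a real semantic search engine these come from a
# trained language model; here they are hand-made 3-d vectors whose
# dimensions loosely encode (US sport, UK sport, person).
EMBEDDINGS = {
    "top NFL quarterback":        [0.9, 0.1, 0.9],
    "top Premier League striker": [0.1, 0.9, 0.9],
    "football stadium capacity":  [0.5, 0.5, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, docs):
    """Rank documents by vector similarity rather than keyword overlap."""
    return sorted(docs, key=lambda d: cosine(query_vec, EMBEDDINGS[d]), reverse=True)

# The same query "best football player" produces different query vectors
# depending on context (US vs. UK), and so retrieves different documents.
us_context = [0.9, 0.3, 0.9]
uk_context = [0.3, 0.9, 0.9]
docs = list(EMBEDDINGS)
print(semantic_search(us_context, docs)[0])  # top NFL quarterback
print(semantic_search(uk_context, docs)[0])  # top Premier League striker
```

The keyword "football" matches every document equally; only the vector geometry distinguishes the user's intent.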

In the U.S., who is bigger or has more history than the Navy? Who stands to benefit more from semantic search through vast repositories of data? And who is less likely to put its data on the cloud to train civilian LLMs? Instead, the military carefully guards its information from prying eyes, keeping it encrypted on secure servers.

Platon opens with a slide of an aircraft carrier. He comes from a Navy family. With the single biggest weapon system on display, a marvel of technology and an exemplar of operations in the most devastating environment possible — war — he has the room’s attention.

Mining a company’s own data is what Goldfire is all about. Accuris does this by combing through all of it, indexing, linking … in short organizing and hyperlinking in a way to make information findable and usable.

The aircraft carrier was not just for show. The U.S. Navy is an Accuris customer and uses Accuris to help make design decisions.

NASA is another customer. Platon tells a story of helping NASA avert a scenario in which astronauts returning to Earth splash down in the ocean but their capsule cannot right itself, leading to a potentially “very bad outcome.” NASA scoured its information troves for details on the inflatable bags used to successfully right reentry capsules during the Apollo program. It found nothing. Engineers called retired colleagues to see if they had kept any information. The search went on for a year until finally NASA called in Accuris.

“Within 20 minutes of implementing the AI infrastructure, they found 249 documents that solved their problem,” says Platon. “They were able to fix the engineering problem. NASA saved $2 million to $3 million and one to two more years of sorting through the data and organizing it.”

“We helped NASA find a solution to astronaut recovery,” says Platon.

Accuris connects internal systems for a “customer digital enablement.” Image: Accuris.

Accuris works with text documents and databases. It aims for a holistic approach to all of an organization’s needs. By analyzing everything, it is able to connect data islands, linking CAD and PLM to ERP and SQL databases as well as to manufacturing and upstream operations such as procurement systems. Other solutions include automated BOM reporting from a database of 1.2 billion parts, Supply Chain Intelligence, the ESDU knowledge base of engineering design data and Engineering Workbench, an AI platform for managing standards, codes and regulations.

Putting it all together, Accuris should theoretically enable an organization to answer operational questions that span departments and disciplines. For example, if a part fails on a production line, how long would it take for the supply chain to replace it?

This is vital for China Plus One strategies, says Platon.
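As a rough sketch of what such a cross-system query looks like once the data islands are connected — the schema, part numbers and lead times below are hypothetical, not Accuris data:

```python
import sqlite3

# Hypothetical schema: in practice BOM data lives in a PLM system and
# supplier lead times live in an ERP or procurement system. Once linked,
# one query answers an operational question that spans departments.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE bom(part_id TEXT, assembly TEXT);
    CREATE TABLE suppliers(part_id TEXT, supplier TEXT, lead_time_days INTEGER);
    INSERT INTO bom VALUES ('P-100', 'Conveyor drive'), ('P-200', 'Gearbox');
    INSERT INTO suppliers VALUES
        ('P-100', 'Supplier A', 21),
        ('P-100', 'Supplier B', 9),
        ('P-200', 'Supplier C', 45);
""")

# If part P-100 fails on the line, how fast can the supply chain replace it?
row = db.execute("""
    SELECT s.supplier, MIN(s.lead_time_days)
    FROM bom b JOIN suppliers s ON s.part_id = b.part_id
    WHERE b.part_id = 'P-100'
""").fetchone()
print(row)  # ('Supplier B', 9)
```

The hard part, of course, is not the query but the indexing and linking that makes both tables addressable in the first place.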

Accuris’ AI technology is already available for this purpose and in use by 900,000 design engineers at many large companies, branches of the armed forces and defense contractors.

It may be Accuris’ worst kept secret.

“We don’t make a big deal of it,” says Platon modestly.

Lest we think Accuris is only about the military, Platon steers us toward green hydrogen, an energy source with such potential that some say it will save the Earth, and a technology in which Accuris is engaged.

Accuris may be better known as IHS (its previous owner) and as a publisher of millions of standards, of which ANSI, SAE and AS are the best known. The company also has access to seven million technical articles and books and 108 million patents and patent applications.

That, all by itself, would be something for a neural network to feast on.

The post Forget ChatGPT, the answer is in your own data appeared first on Engineering.com.

]]>
The AI rendering revolution: “This makes designers cry” https://www.engineering.com/the-ai-rendering-revolution-this-makes-designers-cry/ Tue, 20 Aug 2024 18:07:02 +0000 https://www.engineering.com/?p=130987 Generative AI rendering is “almost too disruptive” according to the rendering veteran behind Depix Technologies.

The post The AI rendering revolution: “This makes designers cry” appeared first on Engineering.com.

]]>
When Philip Lunn co-founded Depix Technologies, he had one goal in mind: one-click rendering.

No more tedious setups, fiddling with virtual cameras and specifying shaders. No more setting the scene and getting the lighting just right. No more ray tracing. Just one click, and seconds later, a gorgeous render.

It wouldn’t work with traditional rendering techniques. But with generative AI, Depix claims to have made one-click rendering a reality.

“It’s so easy it’s almost frightening,” Lunn, CEO of Depix Technologies, told Engineering.com.

Here’s a look at the emerging world of AI rendering.

Generative rendering

Some applications of generative AI are obviously in their early stages. AI can write, but not well. It can create 3D models, but they’re painfully rough. AI can run simulations, but only in narrow cases.

Image generation is different. While AI images vary in quality, the best can be incredibly convincing. And even if you can tell that an image is AI-generated, it often doesn’t matter; it does the job well enough. (For a case in point, go to any blog and look at the thumbnail images.)

Given that rendering is nothing more than creating an image, it should come as no surprise that AI is now doing the job. And it’s doing it so well that Lunn, who’s been in the rendering software industry since 1997, calls it “revolutionary rendering.”

After seeing a demo of the process, it’s hard to disagree.

You don’t even need a CAD model to create a fantastic render: a screenshot of a CAD model will do the trick. So will a pencil sketch. Depix has developed an AI image generator specialized in product renders, with an image of your design as the prompt.

Take a look. The image pair below is from CADviz, one of Depix’s products, which takes a CAD screenshot (Solidworks, in this case, but it could be any CAD program) and outputs a product render:

Left: The original CAD screenshot. Right: The CADviz rendering. (Image: Depix Technologies.)

The user doesn’t have to specify any details about the render—no materials, no scene info, nothing. CADviz takes about 8 seconds to generate a render from a CAD screenshot.

Here are a few more examples using screenshots from Solidworks, SketchUp and Revit, respectively:

Examples of CADviz taking CAD screenshots (left) and rendering them (right). (Images: Depix Technologies.)

You can try CADviz yourself—it’s free, but limited. A paid version with more features will be available soon, according to Lunn, allowing high-resolution renders with control over the output.

That’s just the first turn of the rendering revolution.

How to make designers cry

Another Depix product, SceneShift, is “blowing up” in preview, according to Lunn. “Everybody wants it,” he says.

SceneShift merges text prompting with the crucial constraint of product rendering: shape preservation. Give it a product image, plus a text prompt for a new background, wait a few seconds, and you have a brand new render: new scene, new lighting, same product. Lunn boasts that the results are “better than any CG expert.”

Here’s an example of a SceneShifted car:

Examples of SceneShift creating new versions of the same product render based on different text prompts. (Images: Depix Technologies.)

Then there’s StyleDrive, another Depix tool that’s struck a chord with early testers. Or, as Lunn puts it: “This makes designers cry.”

In StyleDrive, users upload two images, a base image and a style image. The style image drives a change to the base image. For example:

In StyleDrive, the style image drives a change to the base image. (Image: Depix.)

There are no restrictions on what either image can be. A designer could sketch a car and apply the style of a diamond, as in this example:

StyleDrive takes an input base image (the sketch on the left) and applies a style image (the diamond on bottom) to generate a render (right). (Image: Depix Technologies.)

In the time it takes a designer to scribble on a napkin, plus 8 seconds, StyleDrive generates a completed render. Even if it’s not strictly photorealistic, Lunn views StyleDrive as a conceptualizing superpower.

“It can just spit out infinite variations,” Lunn said. “It’s a little disturbing, but exciting at the same time.”

Some are already excited about it. Lunn says Depix already has a number of paying customers in the automotive industry (though to respect their privacy, he declined to name any).

How to try AI rendering

Though Depix has initially focused on the automotive industry, its technology is not limited to vehicles. Depix’s AI model is based on Stable Diffusion, an open source model trained on 5 billion images, so it works with many subjects.
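For readers curious about what sits underneath a diffusion model like Stable Diffusion, the core idea fits in a few lines of numpy: an image is pushed forward into noise according to a schedule, and generation amounts to learning to undo that noise. The oracle denoiser below stands in for the neural network a real model trains; the numbers are a toy, not Depix's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 4x4 "image" and one step of the diffusion forward process:
# x_t = sqrt(abar) * x0 + sqrt(1 - abar) * eps
x0 = rng.uniform(0.0, 1.0, size=(4, 4))   # clean image
alpha_bar = 0.3                           # cumulative noise-schedule term
eps = rng.standard_normal(x0.shape)       # the Gaussian noise added

x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

# A real model trains a network to predict eps from x_t; if that
# prediction is exact, the clean image is recoverable by inverting
# the forward equation:
x0_hat = (x_t - np.sqrt(1 - alpha_bar) * eps) / np.sqrt(alpha_bar)
print(np.allclose(x0, x0_hat))  # True
```

Everything else — text conditioning, the 5 billion training images, the latent space — is machinery for making that noise prediction good.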

Here’s an example of SceneShift applied to a fashion model:

(Image: Depix Technologies.)

Lunn says users will be able to “personalize” the rendering AI with custom training data, up to 1000 images. They could include sketches, CAD models, renders or any other images that could tune the model in the desired direction. With the ability to specify the weight (or “influence”) of this training set, users will notice an appreciable difference in the output, according to Lunn.

Depix’s AI tools are available as individual APIs and together in an interface called Depix Design Lab. Right now, anybody can test the technology on Depix’s Discord server (warning: it’s addictive).

Someday Lunn hopes to see Depix tools integrated in professional software such as CAD and PLM. He says the company is having discussions with several developers, noting specifically that his team is working on a plug-in for Autodesk VRED.

“We’re taking the rendering world by storm,” Lunn says.

The inhabitants of that world may want to reach for an umbrella—or perhaps a lifeboat.

The post The AI rendering revolution: “This makes designers cry” appeared first on Engineering.com.

]]>
What are the risks of using AI in engineering? https://www.engineering.com/what-are-the-risks-of-using-ai-in-engineering/ Wed, 14 Aug 2024 14:06:35 +0000 https://www.engineering.com/?p=123143 New technologies mean new pitfalls. Here are the three biggest hazards of using AI in engineering.

The post What are the risks of using AI in engineering? appeared first on Engineering.com.

]]>
Artificial intelligence (AI) has been proliferating rapidly in recent years, driven by advancements in computing and statistical algorithms to make some truly stunning leaps forward. There’s a growing sentiment that AI will soon be everywhere, and engineering is no exception. The possibilities of what engineers could do with AI are tantalizing but, as with any new technology, the adoption of artificial intelligence carries with it inherent risks.

The substance of the risks naturally depends on how organizations choose to deploy AI, but in the specific context of engineering, there are three particular hazards of which all stakeholders should be aware.

#1 – Misinformation

Although the AI industry would prefer everyone to use the vaguer, more evocative term ‘hallucinations,’ the simple fact is that AI systems — particularly large language models (LLMs) — routinely generate falsehoods. There’s been no shortage of suggestions for how to overcome this tendency, whether it’s better training data, expert input or even just making the models larger and more complex.

Unfortunately, the inherent structure of LLMs, and of machine learning more generally, means that the outputs of these systems are statistical inferences, which can never be guaranteed to be truthful. Add to that the fact that LLMs are fantastic confabulators (to use the polite term), and the potential for AI systems to misinform the engineers using them becomes a serious risk indeed.
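A toy sketch makes the point concrete: a language model samples its next token from a learned probability distribution; it does not consult a fact database. The distribution below is invented for illustration, but it shows how a model that "knows" the right answer most of the time still emits plausible-but-wrong tokens some fraction of the time.

```python
import math
import random

random.seed(4)

# Invented next-token logits for the prompt "The capital of Australia is".
# The correct answer has the highest score, but not all of the probability.
logits = {"Canberra": 2.0, "Sydney": 1.4, "Melbourne": 0.6}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    z = sum(math.exp(v) for v in scores.values())
    return {k: math.exp(v) / z for k, v in scores.items()}

probs = softmax(logits)  # P(Canberra) is only about 0.56 here

# Sampling 1,000 completions: a substantial minority are confidently wrong.
samples = [random.choices(list(probs), weights=probs.values(), k=1)[0]
           for _ in range(1000)]
wrong = sum(tok != "Canberra" for tok in samples)
print(f"wrong answers in 1000 samples: {wrong}")
```

No amount of sampling machinery turns this into a truth guarantee; it only reshapes the odds.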

#2 – Gray Work

The term ‘gray work’ refers to the ad-hoc solutions and workarounds users are forced to adopt when new technologies make their jobs more difficult, rather than easier. For example, imagine a manufacturer decides to add a new machine vision system to an existing production line in order to reduce the number of defects it produces. If the system requires workers to ensure that each product is positioned just so in order to function, the efficiency gained by its detecting more errors will be far outweighed by the extra effort it requires from employees.

In the context of artificial intelligence, gray work typically involves reconciling data from disparate sources, including sensors, databases and various tools or applications. For example, an engineer working on a predictive maintenance program might need to collect data from designers, customers and the production line. If that predictive maintenance program’s output then becomes part of a larger quality report, the engineering team behind the report may find themselves spending more time sorting through data than doing any actual engineering.
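A minimal sketch of that reconciliation work (the asset names and naming schemes below are invented): the same pump appears under different identifiers in the sensor export and the maintenance database, and someone must write, and forever maintain, glue code before any analysis can start.

```python
# The same pump is "PMP-07" in the maintenance database but "pmp07" in
# the sensor export; neither system was designed with the other in mind.
sensor_export = [
    {"asset": "pmp07", "vibration_mm_s": 7.1},
    {"asset": "fan12", "vibration_mm_s": 2.3},
]
maintenance_db = {
    "PMP-07": {"last_service": "2024-05-02"},
    "FAN-12": {"last_service": "2024-06-18"},
}

def normalize(asset_id):
    """Ad-hoc mapping between the two systems' naming schemes."""
    return asset_id[:3].upper() + "-" + asset_id[3:]

# The "gray work": join the sources by hand before the real analysis begins.
report = []
for reading in sensor_export:
    key = normalize(reading["asset"])
    record = maintenance_db.get(key)
    if record:
        report.append((key, reading["vibration_mm_s"], record["last_service"]))

print(report)
```

Multiply this by every sensor format, database and tool in the pipeline and the time spent on glue code can easily exceed the time spent on engineering.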

#3 – Waste

Artificial intelligence is often celebrated for its efficiency, or more specifically for the potential efficiency gains it offers. However, the not-so-secret shame of AI is that it’s incredibly resource intensive to operate. Data centers — the required infrastructure for any cloud-based AI, which is most of them — have a high carbon footprint due to their significant electricity needs, in addition to requiring copious volumes of water for cooling. Of course, this pertains to cloud-based design tools as well, not just AI, but what makes the latter more of a concern is the way it’s being used today.

Typically, machine learning models need to go through several generations of training before they become practically useful, which means that every new ML model represents the consumption of more water and electricity. Add that to the fact that many models are redundant copies created solely for commercial purposes, and the risk of creating excess waste by using AI becomes fully apparent. Moreover, even if you’re not inclined to worry about resource consumption from the perspective of climate change, you should be aware that AI companies are taking it seriously by charging their customers more for each query they make.

The post What are the risks of using AI in engineering? appeared first on Engineering.com.

]]>
What is generative AI? https://www.engineering.com/what-is-generative-ai/ Mon, 12 Aug 2024 14:14:47 +0000 https://www.engineering.com/?p=104337 Here’s what engineers need to know about using generative models in design.

The post What is generative AI? appeared first on Engineering.com.

]]>
Generative artificial intelligence (AI), also known as GenAI, has been at the forefront of the AI boom that began in the early 2020s. However, as with much of the field of artificial intelligence, the basis for the technology is significantly older. It goes all the way back to Andrey Markov, the Russian mathematician who developed a stochastic process known as a Markov chain to model natural language.

However, while the theoretical basis of generative AI can be traced back to advances in statistics in the early 20th century, the technology that makes it possible is much more recent. This combination of complex mathematics and novel technology tends to obscure GenAI’s actual capabilities, resulting in underestimations or, more often, overestimations of what it can do. Hence, for engineers likely to see more and more references to generative artificial intelligence in the coming years, it’s worth settling some basic questions.

What’s the difference between generative AI and generative design?

There’s an unfortunate tendency among industry professionals to use ‘generative design’ and ‘generative AI’ interchangeably, largely driven by the current marketing hype surrounding artificial intelligence. However, there are important differences between these two terms, specifically in how their underlying mechanisms operate.

Generative design uses parametric rules to define and execute a procedure through iteration. For example, using generative design to create a load-bearing strut would involve setting the relevant limits on materials, dimensions, and so on, resulting in a variety of designs that can be further modified manually.
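That loop can be sketched in a few lines. The materials data, load and constraints below are illustrative only, not real engineering values: sweep a parametric design space for the strut and keep the candidates that satisfy the constraints.

```python
import itertools

# Illustrative material properties (not real design data).
YIELD_STRENGTH_MPA = {"steel": 250, "aluminum": 95}
DENSITY_KG_M3 = {"steel": 7850, "aluminum": 2700}
LOAD_N = 50_000      # axial load the strut must carry
LENGTH_M = 1.0       # strut length

# Generative-design-style sweep: iterate the parametric space, apply the
# constraint, and collect feasible candidates for manual refinement.
candidates = []
for material, width_mm in itertools.product(YIELD_STRENGTH_MPA, range(10, 41, 5)):
    area_mm2 = width_mm ** 2                 # square cross-section
    stress_mpa = LOAD_N / area_mm2           # N / mm^2 == MPa
    if stress_mpa < YIELD_STRENGTH_MPA[material] / 2:   # safety factor of 2
        mass_kg = DENSITY_KG_M3[material] * (area_mm2 * 1e-6) * LENGTH_M
        candidates.append((mass_kg, material, width_mm))

best = min(candidates)  # lightest feasible design
print(best)
```

The output is a family of rule-satisfying designs, with the lightest surfaced first; no statistical model or training data is involved, which is exactly the distinction from generative AI.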

In contrast, generative AI uses statistical model weights for training and content generation. Using generative AI to create a load-bearing strut would involve training a model toward a defined goal using previous strut designs, resulting in one or more designs that would, ideally, combine the optimal features of previous efforts into something novel.

At the time of writing, generative design is far more common in engineering than generative AI, which has seen the most success in the areas of natural language processing and generating digital content, primarily images and text. Nevertheless, given the widespread interest and significant investment in generative AI across industries, it’s likely that engineers will see more of it in their jobs in the years to come.

How does generative AI work?

At its core, generative AI is about machine learning, specifically applying unsupervised or self-supervised learning to a data set. Broadly speaking, GenAI systems can be classified in terms of their modality and whether they’re unimodal (accepting only one type of input, such as text) or multimodal (accepting multiple types of input, such as images and text).

In each case, the system is trained on relevant examples in the appropriate modality—words, sentences, images, and so on—and, with enough examples, the system eventually recognizes patterns that enable it to discriminate between examples as well as generate its own. In the case of multimodal systems, these patterns include correlations between the various modalities that enable it to translate between them.

Originally, these results were generally achieved via a generative adversarial network (GAN), in which one model attempts to create novel data based on its training data while another attempts to discriminate the generated data from the real thing. The better the former model performs, the closer the system’s outputs are to the desired results.

More recently, transformers have emerged as a popular alternative architecture to GANs. They’re primarily used for processing sequential data, such as in natural language processing, by using self-attention mechanisms to identify dependencies between items across the entire sequence. This approach has proven to be significantly more scalable, resulting in the rise of ChatGPT and related AI tools.
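The self-attention operation at the heart of a transformer is compact enough to sketch directly. The weight matrices below are random stand-ins for learned parameters; the shapes follow the standard single-head formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # 4 tokens, 8-dim embeddings

x = rng.standard_normal((seq_len, d_model))  # token embeddings
# Random stand-ins for the learned query/key/value projections.
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))

Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = Q @ K.T / np.sqrt(d_model)          # pairwise token affinities
# Softmax over each row: how much each token attends to every other token.
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
output = weights @ V                         # each token mixes in the others

print(weights.shape, output.shape)  # (4, 4) (4, 8)
```

The (4, 4) weight matrix is the "dependencies between items across the entire sequence" described above: every token gets a direct, learned connection to every other token, which is what makes the architecture scale so well.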

What can engineers do with generative AI?

As indicated above, generative AI is much less likely to be practically useful for engineering compared to generative design, at least in GenAI’s current state. However, there are already examples of generative AI being used to automate part of the 3D modeling process by converting 2D images into 3D models. One example, NeROIC (Neural Rendering of Objects from Online Image Collections), uses a neural network to generate 3D models from online images of common objects. Given the rate at which this technology seems to be advancing, it’s not hard to imagine a whole host of GenAI tools that could improve CAD workflows.

Another potential application for generative AI in engineering is the development of synthetic data for simulation and validation. The advantage of synthetic data is that it can be an alternative to data produced by real-world events that may be rare and/or undesirable, such as natural disasters or catastrophic system failures. Of course, users must be wary of the potential for implicit biases and ensure that any synthetic data produced with GenAI is representative of the real-world data distribution.
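A minimal sketch of the idea, using a plain Gaussian fit rather than a GenAI model (the failure measurements below are invented): fit a distribution to scarce real data, sample as many synthetic cases as a simulation needs, and sanity-check that the synthetic set matches the real distribution — the bias warning above in miniature.

```python
import random
import statistics

random.seed(1)

# A handful of invented "real" measurements of a rare event, e.g. the wind
# speed (knots) at which a component failed in the field.
real_failures_knots = [61.2, 58.7, 63.5, 60.1, 59.8]
mu = statistics.mean(real_failures_knots)
sigma = statistics.stdev(real_failures_knots)

# Generate as many synthetic cases as the simulation needs.
synthetic = [random.gauss(mu, sigma) for _ in range(10_000)]

# Sanity check: the synthetic set should reproduce the real distribution's
# parameters, otherwise downstream simulations inherit the bias.
print(abs(statistics.mean(synthetic) - mu) < 1.0)  # True
```

A real GenAI pipeline replaces the Gaussian with a learned model that can capture correlations and rare structure, but the validation step stays the same, and remains the part most often skipped.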

Many other use cases for generative AI in engineering are likely to emerge as the technology continues to develop and scale. Automatically generating documentation or snippets of code and identifying anomalous patterns in production data for predictive maintenance and quality control are just a few of the potential engineering applications for generative AI.

The post What is generative AI? appeared first on Engineering.com.

]]>