What are input and output tokens in AI?
https://www.engineering.com/what-are-input-and-output-tokens-in-ai/ (Mon, 18 Nov 2024)
This is how our interactions with AI are broken down into bits and bytes, and how pricing is defined.


In the context of AI, particularly language models like GPT (Generative Pre-trained Transformer), input tokens and output tokens refer to the units of text that the model processes and generates, respectively. These tokens are the building blocks that allow the model to interpret and generate language.

Input tokens

Input tokens are the pieces of text that you provide to the model as input. This could be a sentence, a question or any other kind of prompt the model needs to process.

When you enter text, the language model first breaks it down into smaller units called tokens. These tokens can be individual characters, words or sub-words, depending on the model’s tokenization process.

For example, the sentence “Hello, how are you?” might be broken down into several tokens, such as: “Hello”, “,”, “how”, “are”, “you”, “?”.
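A minimal sketch of this splitting step in Python. Note that this uses simple word-and-punctuation rules purely for illustration; real models split on learned subword vocabularies, so their boundaries differ:

```python
import re

def toy_tokenize(text):
    """Split text into word and punctuation tokens.

    Simplified illustration only: real language models use subword
    schemes (such as BPE) rather than whitespace/punctuation splits.
    """
    return re.findall(r"\w+|[^\w\s]", text)

print(toy_tokenize("Hello, how are you?"))
# ['Hello', ',', 'how', 'are', 'you', '?']
```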

The model uses these tokens to understand the meaning and context of the input and generate a response.

Tokenization is typically done using Byte Pair Encoding (BPE) or a similar algorithm that aims to split text into the most efficient and meaningful pieces for the model.
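The core idea behind BPE can be shown with a toy merge loop: repeatedly fuse the most frequent adjacent pair of tokens. This is a corpus-free simplification for illustration; production tokenizers learn their merges from very large corpora and store them as a fixed vocabulary:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Return the most common adjacent token pair."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def bpe_merge(tokens, num_merges):
    """Greedily merge the most frequent adjacent pair, num_merges times."""
    for _ in range(num_merges):
        pair = most_frequent_pair(tokens)
        merged, i = [], 0
        while i < len(tokens):
            if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
                merged.append(tokens[i] + tokens[i + 1])  # fuse the pair
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens

# After two merges, the frequent fragment "low" emerges as one token.
print(bpe_merge(list("low lower lowest"), 2))
```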

Output tokens

Output tokens are the pieces of text that the model generates as a response to the input. After processing the input tokens, the model predicts the next most likely tokens to produce a coherent and contextually relevant output.

The model generates output tokens one at a time, predicting the next token based on the previous ones, until it reaches a predefined limit or completes the response.

For example, if the input is “What is the capital of France?”, the model might generate the output “The capital of France is Paris.” Each word or punctuation mark in this output is considered a token.
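The one-token-at-a-time loop can be sketched with a toy stand-in for the model. The lookup table below is purely illustrative; a real model scores every token in its vocabulary with a neural network rather than consulting a fixed table:

```python
import random

# Toy "language model": a lookup of plausible next tokens, conditioned
# on the two most recent tokens. Entirely hypothetical, for illustration.
NEXT_TOKEN = {
    ("capital", "of"): ["France"],
    ("of", "France"): ["is"],
    ("France", "is"): ["Paris"],
    ("is", "Paris"): ["."],
}

def generate(prompt_tokens, max_new_tokens=10):
    """Generate one token at a time, feeding each prediction back in."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        context = tuple(tokens[-2:])   # condition on the last two tokens
        candidates = NEXT_TOKEN.get(context)
        if not candidates:             # no known continuation: stop
            break
        tokens.append(random.choice(candidates))
        if tokens[-1] == ".":          # end-of-response marker
            break
    return tokens

print(generate(["The", "capital", "of", "France", "is"]))
# ['The', 'capital', 'of', 'France', 'is', 'Paris', '.']
```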

Tokens and model limitations

Language models like GPT have a token limit, which refers to the maximum number of tokens they can handle in a single input-output interaction. This limit includes both the input tokens and the output tokens. For example, if a model has a token limit of 4096 tokens, that means the total number of tokens in the input plus the output must not exceed that number.

If the input is too long, the model may truncate it or may not be able to generate a sufficiently long output.

Token limits vary between different models. For example, GPT-4 may handle up to 8,000 or 32,000 tokens in one prompt, depending on the version.
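Applications often enforce this shared budget themselves before calling a model. A sketch, with illustrative numbers: the 4,096-token limit and the 512 tokens reserved for output are assumptions for the example, not any particular model's values:

```python
def fit_within_limit(input_tokens, model_limit=4096, reserved_for_output=512):
    """Truncate the input so input plus output can fit within the limit.

    Returns the (possibly truncated) input and the remaining token
    budget available for the model's output.
    """
    max_input = model_limit - reserved_for_output
    if len(input_tokens) > max_input:
        # Keep the most recent tokens, a common (if lossy) strategy.
        input_tokens = input_tokens[-max_input:]
    return input_tokens, model_limit - len(input_tokens)

trimmed, output_budget = fit_within_limit(["tok"] * 5000)
print(len(trimmed), output_budget)  # 3584 512
```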

Why tokens matter

Tokenizing text into manageable pieces allows the model to process and generate language more efficiently. It also helps the model deal with the complexities of human language, such as word variations, sentence structures, and punctuation.

In many AI systems, the number of tokens processed can directly influence the cost of using the model, as AI services may charge based on the number of tokens in both the input and output.

Other modalities for handling inputs and outputs

Tokens might be the primary method language models like GPT use to handle inputs and outputs, but they are not the only method.

While large language models (such as GPT) focus on text-based tokens, AI systems can also handle other types of inputs and outputs beyond text tokens.

AI models like DALL·E, CLIP and Stable Diffusion handle images as inputs and outputs. In these cases, AI processes pixels or embeddings of images, rather than textual tokens. The input might be an image (for image recognition) or a text prompt that generates an image.

For speech recognition or text-to-speech models—such as Whisper or Tacotron—the input could be audio signals (converted into spectrograms or other representations) or text, and the output could be transcriptions of speech or spoken responses.

Video AI models process and generate sequences of frames, allowing for tasks like video analysis, generation and transformation.

Some AI models are designed to process structured data such as graphs, tables and databases. These models do not use tokens in the same way that text-based models do. For example, AI used in graph neural networks (GNNs) works with nodes and edges, and models that deal with tabular data (such as AutoML models) process features in a structured form.

Some advanced AI systems, like GPT-4 and CLIP, are multimodal, meaning they can handle both text and images. These models don’t always use tokens in the traditional sense but instead work with various embeddings (vector representations) of input data, like a combination of textual and visual features.

Is token-based pricing the only model for AI?

No, token-based pricing is not the only model used for pricing AI services, but it is the most common model for text-based AI models. The pricing model varies depending on the type of AI service, the complexity of the model, and the application. Here are some common pricing models for AI:

1. Token-Based Pricing

Common for Text Models: In the case of large language models like GPT, token-based pricing is often used because it directly correlates with the amount of text processed (both input and output). Since token count determines the processing effort required, it serves as a fair metric for charging users based on resource usage.
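Under this model, a request's cost is a simple linear function of the two token counts. The rates below are hypothetical placeholders, not any provider's actual prices; in practice, output tokens are typically priced higher than input tokens:

```python
def estimate_cost(input_tokens, output_tokens,
                  price_in_per_1k=0.0005, price_out_per_1k=0.0015):
    """Estimate a request's cost under per-token pricing.

    Rates are illustrative placeholders; providers publish their own
    per-1,000-token (or per-1M-token) prices.
    """
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# A request with 2,000 input tokens and 500 output tokens:
print(round(estimate_cost(2000, 500), 5))
```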

2. Time-Based Pricing

Usage in Real-Time Processing: Some AI systems, particularly those with more real-time needs like speech recognition or video processing, may charge based on the time spent processing the input, such as seconds or minutes of audio or video analyzed.

3. Subscription or Tiered Pricing

For SaaS Models: Many AI services, particularly in cloud-based platforms, use subscription models where customers pay a fixed price based on the volume of usage (like API calls) or a set of features included. These may include monthly or yearly subscriptions. Some platforms offer tiered pricing, where higher levels come with more features, increased usage limits, or priority processing.

4. Pay-Per-Request or Pay-Per-Feature

For Specialized AI Services: Certain AI platforms, especially those in fields like image recognition, video processing, or AI-driven analytics, may charge based on specific requests or features used. This might be based on the complexity of the task (e.g., detecting objects in an image vs. simple image tagging).

5. Resource-Based Pricing

For Model Training or Compute-Intensive Tasks: When training large models or using cloud-based AI infrastructure, pricing may be based on the compute resources used (such as CPU/GPU time or memory). In these cases, you’re paying for the underlying infrastructure that the model runs on.

Siemens, Enemalta Partner to Digitally Transform Malta’s Grid with Gridscale X
https://www.engineering.com/siemens-enemalta-partner-to-digitally-transform-maltas-grid-with-gridscale-x/ (Mon, 18 Nov 2024)
Gridscale X helps utilities manage outages, identify grid congestion and optimize operations with data-driven insights.

ZUG, Switzerland, Nov 18, 2024 – Siemens and Enemalta will join forces to digitally transform Enemalta’s grid infrastructure with Siemens’ Gridscale X software platform. Leveraging the software’s low-voltage management capabilities, Enemalta aims to enhance its service by reducing outage times for more than 450,000 people and enabling more efficient network management to reduce grid congestion and capacity constraints.

Siemens drives digital transformation of Maltese grid infrastructure for improved reliability

Enemalta, the leading energy services provider in the Maltese Islands, is responsible for distribution of electricity and development of the national distribution grid. Malta has recently faced significant electricity challenges driven by extreme weather. In 2023, a major heatwave brought record-high temperatures, leading to increased electricity demand for air conditioning, and widespread power outages across the country. Enemalta mobilized resources to restore power, achieving stability after several intensive days of repairs. Recognizing that physical network reinforcements alone would not suffice, Enemalta identified the necessity for a comprehensive software solution to enhance operational and planning capabilities. 

“Grid congestion has become a real threat to the energy transition. That’s why we are proud to be working with Enemalta to address their key operational challenges, moving from reactive maintenance to predictive management. With Gridscale X, Enemalta will be able to take the next steps towards improved grid reliability and service for people in Malta. Together, we are advancing the standards for digital transformation in the energy sector,” said Sabine Erlinghagen, CEO of Siemens Grid Software.

“Malta is ramping up second generation smart meter coverage. With additional data unlocked, it will be possible to manage grids on a new dimension. We are delighted to be partnering with Siemens to modernize the grid of Malta with next-generation software and improve reliability for our customers. With Siemens’ expertise and technology, we can enhance the network’s digital capabilities, and lay the groundwork for future innovations, including flexibility management,” added Ryan Fava, Executive Chairman of Enemalta.

About Gridscale X 

Gridscale X is a platform that paves the way for autonomous grid management. It is part of Siemens Xcelerator, an open digital business platform that enables customers to accelerate their digital transformation more easily, faster and at scale. Siemens’ Gridscale X platform offers powerful capabilities designed to bring transparency into the low-voltage network. By utilizing data from smart meters, the solution proactively detects outages, visualizes grid congestion, and delivers actionable insights through advanced analytics.

The pilot phase at Enemalta is expected to begin in November 2024, followed by a longer rollout over the next three years, ultimately leading to full grid modernization.

For more information, visit siemens.com.

MAASS Releases Multi-Material Stereolithography Printer
https://www.engineering.com/maass-releases-multi-material-stereolithography-printer/ (Mon, 18 Nov 2024)
MMSLA introduces innovative technologies to overcome key challenges in AM.

NASHUA, NH, Nov 18, 2024 – MAASS announced that its Shimmy MMSLA (multi-material stereolithography) 3D printer is now on sale, marking the company’s transition from internal development to a commercial product and introducing a groundbreaking platform for multi-material additive manufacturing research and development.

(Image: Shimmy with open lid)

“Huge strides have been made in resin printing, but absolutely nothing for printing with more than just one,” said JF Brandon, CEO of MAASS. “The Shimmy MMSLA represents that shift by enabling unprecedented control over multiple materials at the microscale, opening new possibilities for integrated electronics, smart materials, and next-generation manufacturing processes.”

Breakthrough Multi-Material Technology

MMSLA introduces several technological innovations that address long-standing challenges in additive manufacturing:

  • Dual-vat system enabling simultaneous printing of two distinct materials
  • Resolution capabilities ranging from 0.5 to 50 microns
  • Integrated cleaning system that prevents cross-contamination between materials
  • Support for high-conductivity and dissolvable support materials
  • Build volume of 78mm x 51mm x 141mm optimized for R&D applications
(Image: USB key circuit)

Enabling Next-Generation Applications

The system’s unique capabilities make it particularly valuable for several emerging applications:

  • 3D printed electronics with 2-mil (50 micron) conductive traces
  • Complex non-planar circuit designs
  • Dissolvable support structures for ultra-high-quality surface finishes
  • Research and development of smart materials
  • Rapid prototyping of multi-functional devices
(Image: circuit coupon, detail)

Strategic Focus on R&D and Future Manufacturing

While the initial release targets the research and development market, MAASS’s technology platform is designed to scale for future manufacturing applications. The company’s roadmap includes developing higher-throughput systems based on the same core technology for mass production of multi-material parts.

A Winning Team

MAASS was developed internally by Nectar Labs, a boutique research and design lab. JF Brandon is a partner at Nectar with Nicholas Coluccino, and together they led the team from concept to reality in less than 18 months. Nectar has developed software and hardware products for major entities like the Forest Stewardship Council and Heifer International. Brandon is an accomplished entrepreneur in the additive manufacturing space: his track record includes key roles in successful ventures like GrabCAD and BotFactory, and his innovations have been recognized through major awards, including the $50,000 Change the Course Competition and a special mention from Autodesk/Future of Manufacturing.

For more information, visit maass3d.com.

Siemens’ Altair play: strategic AI move or simulation catch-up?
https://www.engineering.com/siemens-altair-play-strategic-ai-move-or-simulation-catch-up/ (Mon, 04 Nov 2024)
For Siemens, the challenge lies in more than simply acquiring AI—it’s about operationalizing it.

Siemens’ acquisition of Altair Engineering, a leader in Artificial Intelligence (AI), simulation, and high-performance computing (HPC), reflects a bold ambition to strengthen its AI-driven industrial software portfolio. As Tony Hemmelgarn, President and CEO at Siemens Digital Industries Software, said: “This will augment our existing capabilities with industry-leading mechanical and electromagnetic capabilities and round out a full-suite, physics-based, simulation portfolio as part of Siemens Xcelerator.”

With a foundation already set in AI and generative AI capabilities, Siemens is taking a strategic leap to deepen its offerings in areas such as Product Lifecycle Management (PLM) and Digital Twins.

Yet, the acquisition raises critical questions: Is Siemens advancing its strategic edge by embedding next-level AI and knowledge graph technologies, or is it scrambling to keep up in a landscape that is moving faster than ever?

Elevating AI-driven PLM and digital twins

Siemens’ integration of Altair’s powerful AI, simulation and high-performance computing tools into its PLM tech suite, particularly within Teamcenter and Simcenter, offers a potential transformation in how digital twins and simulations are used across engineering and manufacturing. Altair’s deep expertise in physics-based simulations, including mechanical and electromagnetic modeling, could allow Siemens to develop more sophisticated digital twins that not only represent physical products but also predict behaviors and outcomes with high fidelity.

With Altair’s technology, Siemens can push digital twin capabilities beyond basic visualization and monitoring, creating a system that incorporates real-time data, predictive analytics and adaptive simulations. This would enable manufacturers to make informed, AI-driven decisions at every stage of the product lifecycle, from design and development to production and maintenance.

However, despite Siemens’ existing portfolio, which includes substantial AI and generative AI tools, the acquisition raises a critical question—how effectively can Siemens embed these capabilities as a core, transformative feature within its PLM platform? Without a clear path to seamlessly integrate AI across its offerings, Altair’s capabilities risk being relegated to auxiliary add-on features, potentially limiting their business impact. For Siemens, this move is more than just adding tools; it’s about embedding intelligence deeply within the end-to-end PLM framework, making AI a central component of its digital transformation strategy.

Enhancing digital twins with HPC

Siemens is marketing itself as a leader in digital twin technology, primarily through its Xcelerator platform, which integrates real-time operational data to improve asset management, production efficiency and product quality. Altair’s HPC capabilities could significantly enhance Siemens’ digital twin offerings by allowing more complex, detailed, and faster simulations—an essential component of predictive maintenance and optimization for manufacturers.

The integration of HPC into Siemens’ digital twin ecosystem could be transformative, enabling simulation models that accommodate an unprecedented scale of data and complexity. For instance, manufacturers could simulate entire production lines or supply chain networks, gaining insights that help them optimize operations, reduce energy consumption, minimize downtime and predict implications from product changes. This is particularly relevant as industries move toward more sustainable and resilient operations.

However, leveraging Altair’s HPC across Siemens’ existing infrastructure poses some challenges. HPC solutions typically require specialized infrastructure, substantial processing power and technical expertise. Siemens will need to carefully consider how to bring HPC capabilities into mainstream use within its portfolio, including positioning within its maturing SaaS offering. The risk here is that, without a robust integration plan, Altair’s HPC tools may remain isolated and less affordable, providing limited impact and reducing the transformative potential of this acquisition.

Knowledge graph technology: connecting data with digital thread

Altair’s recent acquisition of Cambridge Semantics, a developer of knowledge graph and data fabric technologies, brings new dimensions to the integration of enterprise data across complex manufacturing ecosystems.

Knowledge graphs provide a framework for Siemens to unify and contextualize vast amounts of data from disparate systems—an essential step for effective AI-driven insights and accurate digital twin models. With knowledge graphs, Siemens could break down data silos, connecting information from PLM, digital twins, and other systems into a cohesive whole, creating a seamless digital thread across the lifecycle.

Incorporating Cambridge Semantics’ knowledge graph technology into Siemens’ portfolio could lead to a new era of “data-rich” digital twins, where structured and unstructured data come together to provide a more comprehensive, actionable view of products, assets and operations. By grounding generative AI models in real-world data, knowledge graphs could improve response quality and deliver contextual insights, allowing engineers and operators to make better, faster decisions.

Yet, the question remains: can Siemens adapt this advanced data integration technology effectively in an industrial setting? Cambridge Semantics’ data fabric has been proven in sectors like defense, life sciences, and government. Adapting it for manufacturing will require Siemens to navigate industry-specific complexities. Without careful implementation, the risk is that knowledge graph technology will be underutilized—merely another tool rather than a strategic game-changer in Siemens’ PLM and digital twin offerings.

Strategic opportunity or catch-up?

The acquisition of Altair could empower Siemens to lead in AI-driven PLM, high-fidelity simulations and data-enriched digital twins. But the road ahead demands more than technological additions; it requires Siemens to deeply integrate these capabilities within its core platforms and ensure they serve as transformative, essential components rather than optional add-ons.

For Siemens, the challenge lies in more than simply acquiring AI—it’s about operationalizing it. By embedding Altair’s and Cambridge Semantics’ technologies as central pillars in its software ecosystem, Siemens has the opportunity to redefine industrial intelligence in manufacturing. Can Siemens realize this vision to become a true leader in AI-driven industrial software, or will it struggle to fully leverage these assets, ending up as a late entrant in a rapidly advancing field?

IMSI Design Releases TurboCAD v2024.1
https://www.engineering.com/imsi-design-releases-turbocad-v2024-1/ (Fri, 25 Oct 2024)
This Service Pack includes TurboCAD Copilot integration and 50+ improvements and fixes.

NOVATO, CA, Oct 25, 2024 – IMSI Design announced the release of TurboCAD 2024.1, including Platinum, Professional, Deluxe and Designer versions for Windows desktop PCs. The company’s dedicated TurboCAD team has meticulously crafted this follow-up Service Pack, tailored specifically for its loyal community.

This Service Pack features the integration of TurboCAD Copilot and over 50 tweaks and bug fixes, showcasing cutting-edge innovation.

AI-Powered TurboCAD Copilot Technology

The TurboCAD Copilot feature introduces an AI-driven companion engineered to elevate the design journey. It serves four primary purposes:

  • Responding to help-related queries about how to use the software.
  • Delivering data-driven insights about CAD files.
  • Utilizing an extensive AI knowledge base for a wide range of questions.
  • Using Text to Image AI to create textures and backgrounds for photo rendering.

This integrated AI tool streamlines navigation, expedites the project’s progress, and enhances design endeavors with insightful analysis.

TurboCAD Copilot supports two levels: TurboCAD Copilot Help (a free one-year subscription for all TurboCAD variants) and TurboCAD Copilot Professional (available as a one-year subscription service).

TurboCAD Copilot Help uses “RAG” (Retrieval-Augmented Generation) and documentation to search for content relevant to questions. TurboCAD Copilot Professional includes these Help features and offers additional capabilities such as “Talk to your CAD Data”, general AI access, and Text to Image. Furthermore, it adeptly handles diverse multilingual requests, from guiding users through the initial steps of using TurboCAD® to sharing intriguing details about a file or providing insights on design trends and principles.

“With the introduction of TurboCAD Copilot, we are entering a new era of CAD that allows the designer not only the ability to talk to documentation but to talk directly to their CAD data. Talk to CAD revolutionizes how AI can interact directly to data in the context of CAD objects, geometry, topology, parameters, and custom attributes. Designers can now ask questions that were previously constrained by traditional UI conventions—questions like ‘Examine my file for 3D printing suitability and suggest ways to reduce printing costs’—providing a new way to enhance quality and productivity,” states Tim Olson, vice president of IMSI Design.

A Closer Look at TurboCAD 2024.1’s Exciting New Features

Dive into the innovative enhancements of TurboCAD 2024.1 with their comprehensive video overview. This visual guide showcases all the new features that make TurboCAD 2024 a leader in CAD software.

To quickly see the list of new and key features in TurboCAD Platinum, TurboCAD Professional, TurboCAD Deluxe 2D/3D, and TurboCAD Designer 2D, check out TurboCAD 2024 New Feature Comparison and TurboCAD 2024 Key Feature Comparison.

Availability and Pricing

This Service Pack is available at no additional cost for existing TurboCAD 2024 users. For those using older versions of TurboCAD, upgrade pricing is available.

“Our commitment to delivering top-tier design tools is evident in the TurboCAD 2024.1 Service Pack, which not only introduces TurboCAD Copilot but also includes numerous enhancements, over 50 bug fixes, and critical maintenance updates. These improvements demonstrate our dedication to providing a seamless and advanced user experience,” outlines Rita Buschmann, senior product manager, CAD and Home Design.

Their product lineup includes:

  • TurboCAD Platinum: Their premium package, priced at $1,499.99.
  • TurboCAD Pro: Packed with advanced tools for detailed design work, available for $999.99.
  • TurboCAD Deluxe: Offers a comprehensive toolkit for various design projects, for $299.99.
  • TurboCAD Designer: Dedicated to 2D design, perfect for beginners, at an affordable $99.99.

For those seeking a more dynamic design experience, TurboCAD Copilot Professional is offered with a one-year subscription for $199.99.

Additionally, their Training Guides, Add-ons and Symbols are specially designed to complement and expand capabilities when working with TurboCAD 2024.

For more information, visit TurboCAD.com or imsidesign.com.

How to fight technological inertia to make key improvements
https://www.engineering.com/how-to-fight-technological-inertia-to-make-key-improvements/ (Tue, 08 Oct 2024)
This CMM use case demonstrates how overcoming technological inertia is essential to staying competitive.



Using a portable measurement arm (Image: Frontier Metrology)

Continuous improvement is the name of the game in manufacturing, paring away inefficiencies and making projects more profitable–something management gets very excited about. However, while some improvements can be made with small investments, such as kanban cards or kaizen bins, other process improvements require more significant investments. Bringing in new technology can cost hundreds of thousands to millions of dollars. And the initial outlay for a piece of equipment is just the beginning: training, upskilling and shop space may be associated costs. When it comes time to open the checkbook, some manufacturing decision-makers start singing a different tune: why do we need these improvements again? What’s wrong with the old process?

When it’s time to change, manufacturers may face technological inertia: the phenomenon by which accumulated knowledge of and experience with one technology exerts pressure against effectively adopting a new one. One example of this is the portable coordinate measuring machine, or CMM arm, which can drastically reduce inspection times and drive immediate production quality improvements.

So how can manufacturing managers successfully navigate an opportunity to invest in new technology to improve a process, without getting stuck in the inertia of the known?

What is technological inertia?

One example of technological inertia might take place at a busy doctor’s office considering a transition from x-ray to CT scans: it’s more complicated than just buying the machine, because:

  • Technicians know how to take x-rays and may require new training.
  • Patients are already familiar with what’s required for an x-ray.
  • Doctors are more experienced in interpreting x-rays than CT scans.
  • The business can’t afford to see a ‘dip’ in quality of care during an adjustment period.

So, instead of investing in the CT scanner, the business may purchase more x-ray machines, to keep up with increasing demand without having to adopt the new generation of technology. In this way, the inertia of the old technology inhibits any improvements the new technology could bring.

Overcoming this inertia to realize the benefits of a new technology is essential for companies to stay competitive and drive profits.

To learn more about how manufacturing decision makers can overcome this inertia and find their way to the other side, engineering.com spoke to two engineering and metrology professionals with experience not only with portable CMM arms but also with the technologies and processes they replace: manual measurement techniques such as plate layout, verniers and gauge blocks, as well as the venerable bridge CMM in the climate-controlled quality lab.

Alex Dunn is a Manufacturing Engineer and Measurement Specialist with ten years’ experience in the gas turbine manufacturing industry. He’s watched as portable measurement arms have developed from early models of limited accuracy to today’s models, which can rival the accuracy of the bridge CMM.

Fabrizio Beninati is the owner of Frontier Metrology, a metrology service provider and FARO distributor based in Ontario, Canada. Working with FARO and Polyworks, Fabrizio has seen the full gamut of metrology workflows, from the creative to the archaic, as he demos and promotes the capability of the portable FARO arm equipped with a laser line scanner.

Quality and Metrology in Manufacturing

Ever since the industrial revolution brought the innovation of interchangeable parts, measurement and quality control have been essential parts of manufacturing. Tolerances limiting the variation of part dimensions are set by engineers according to the requirements of the part, and quality control processes ensure that production matches those dimensions accurately, within set tolerances. The instruments that enable quality control range from the humblest tape measure in the construction industry to the ISO 10360-certified, gold-tipped Zeiss CMM with 0.3 µm accuracy, and everything in between.

A few common metrology solutions in manufacturing include:

  • Manual instruments such as micrometers, depth gauges and verniers
  • Granite surface plates used in conjunction with tools such as height gauges, gauge blocks and dial indicators
  • Optical devices such as comparators, shadowgraphs and profile projectors
  • A typical 3-axis bridge or gantry CMM, which can be programmed to take highly accurate measurements using probe contact
  • Portable CMMs, which digitally read the joint positions of an arm to interpret the 3D position of the measurement device, such as a probe or digital optical system
  • Digital optical systems, such as 3D scanners
  • CAD software, an essential part of digital metrology solutions, as collected measurement data can be compared to the CAD model reference to determine and report on deviations

For manufacturers used to one of these processes, sources of inertia, including personnel training, the cost of new equipment and unfamiliarity with the benefits and ROI of new alternatives, make it difficult to implement new solutions such as the portable CMM.

Advantages and applications of Portable CMM

In the typical machine shop or fabrication shop, the headache of metrology is that manual tools such as verniers are fast but not sufficiently accurate or repeatable, while the bridge CMM is highly accurate–even surpassing many projects’ tolerance requirements–but too slow, especially for new projects, when it needs to be programmed. In Alex Dunn’s experience as a Measurement Specialist, while many of the arm CMMs on the market today can’t match the accuracy of a bridge CMM, they can typically hold tolerances above 1 thou. For applications with more relaxed tolerances, such as fabrication, the speed and flexibility of portable CMMs is unmatched. “You grab the arm, calibrate the probe once, you don’t have to worry about calibrating different angles, different styli. You just calibrate the probe and it’s good for the whole volume of measurement. You’re not concerned with crashing the machine. You’re not concerned with having to bring parts into the temperature controlled lab. You get to go to your parts,” explained Dunn.

“The big power of a bridge CMM is automation. If you have a high volume of parts that need to be measured, you program the machine and you can train an operator to run the machine, get the data, and do what they need to do with the parts.” While this automation makes for a faster process at high volumes, it also comes with costs.

First, the CMM needs to be programmed, not unlike a CNC mill, to make the movements required to bring the probe to touch off at each measurement location. Second, a CMM is a very expensive piece of equipment. If the quill or the probe crashes into the part, you’ve not only scrapped a part, but the CMM must now be calibrated or repaired. “If you don’t have high volume, the portable solution is much faster because you’re not programming a machine,” said Dunn. “You’re simply operating the instrument physically, so you’re not going to be concerned with your clearance planes or your stylus calibration. You can obviously damage your part if you hit it with the instrument, but the risk of collision is much less. You can very rapidly get the measurement you need and move the part on. So you gain a lot of speed in that regard.”

Dunn recommends the Hexagon Romer arm. “If you pair that with the [Hexagon] AS1 laser line scanner, which I was fortunate enough to have at a previous employer, you can tackle a lot of projects with that because you have the ability to do touch probing as well as scanning for large surfaces.”

This pain point of slow cycle time is why many customers call Fabrizio Beninati at Frontier Metrology to learn about arm CMM technology. For Beninati, understanding the advantages and applications of portable CMM systems is key to not only selling them as a distributor, but also using them himself as a metrology service provider.

Beninati finds that many manufacturers in the automotive industry are moving some inspection tasks to portable CMM solutions, finding conventional CMMs too slow and cumbersome. “A lot of our clients are leaving the CMM behind, or dedicating the CMM to the high precision work and switching to the arm to do a higher volume of parts.” With the arm and laser line scanner in conjunction with Polyworks, said Beninati, “customers are able to quickly scan and target what they need and generate a report in a fraction of the time.” Beninati recommends a FARO arm in conjunction with a laser line scanner and Polyworks software to quickly and easily capture measurement data and use frame-of-reference inspection to detect and measure deviation.

A part in Polyworks, showing the scanned data mesh (real part dimensions) compared to the CAD reference, with deviation callouts. (Image: Frontier Metrology)

“CMM is slow, it’s methodical, it’s delicate work,” said Beninati. “You don’t rush CMM because probes are expensive, the machine’s expensive, everything is very methodical in the CMM world. The arm is more freehanded and forgiving to capture data.”

Beninati echoed Dunn’s view that while portable systems may not be able to measure as accurately as bridge CMMs today, they still find applications in precision manufacturing. “We have customers that use a bridge CMM for their initial process capability studies, and they say once we’re within our values now we’ll just inspect using the FARO arm because you can do tenfold more vs the CMM.” Both Dunn and Beninati anticipate the accuracy rating of portable systems creeping up in the future to rival that of larger machines.

How to Overcome Technological Inertia and Improve Manufacturing Processes

According to Beninati, technological inertia is a major reason why manufacturers drag their feet or fail to upgrade to faster, more efficient processes, such as a portable CMM. When an aerospace manufacturing shop hired Beninati as a service provider to do inspection using the FARO arm, he pitched a portable system for the customer to buy so that they could implement it in-house. “They told me they use shadowgraphs. They measure using a shadow and grid cells, counting them out manually,” he explained. “It works, and they say, ‘Why improve it? Why get an arm?’ They can just do it by counting.”

Stay up to date

The leaders who successfully navigate a technology change are those who empower themselves and their teams with knowledge about new emerging technologies. Without an understanding of emerging technologies, it’s impossible to see how they may apply in your processes. Beninati highlighted trade shows and supplier demos as key tools for manufacturing leaders to help stay in the know.

“Find those couple trade shows that are on the cutting edge,” said Beninati. “That’s the vendors’ time to shine. Go to trade shows where the big players go, and see the new and emerging technologies. Stay open to possibilities.”

Dunn also highlighted demos as a key knowledge tool. “I always advise folks, if you’re interested in any of these instruments contact the suppliers because it’s their job to demo the equipment and bring application engineers to you,” he said. “You can put the product right in front of them and say, ‘show me how to measure it,’ And it’s their job to prove to you that they can. Then everyone wins.”

Upskill employees

Part of technological inertia is cultural. It’s natural for individuals, especially in their jobs, to resist and fear change and the instability it may bring. When a new technology reduces labor hours required for a process, workers may wonder if they will lose those hours. Communication is key to confronting this mindset and assuaging these fears. Managers can address these fears by highlighting the benefits of upskilling for employees.

“I’ve seen cases where workers want the arm because they see it as a new skill set, a new and emerging technology,” said Beninati. “But I’ve also seen quality teams who have seen it as a threat to their jobs.” However, in his experience, customers that do get started with a portable CMM find applications for both the new instrument and their existing CMM, which can lead to more opportunities for metrology personnel than before.

Plan funding strategically

If there is no budget, the benefits of a new technology don’t matter. Beninati has seen this firsthand as a supplier. When a customer assembling parts sourced from many suppliers began seeing quality variations, they implemented a manual metrology process using height gauges and blocks to measure each assembly by hand. “They were taking up to 3 hours for each assembly,” recalled Beninati. “I went in there, did the demo, scanned it in like 3 minutes, and even had their guy (who had never touched a FARO arm) try it. It took him 8 minutes, so I said, ‘How many can I put you down for?’ And we all laughed, but today it’s waiting on upper management.” Even though Beninati and the FARO arm demonstrated crystal-clear ROI, cost stood in the way.

“If you’re an OEM supplier or plant and you secure that big contract, you’ve already put in your budget the building expansion, the new tools, new equipment. So in for a penny, in for a pound. You didn’t put in that FARO arm because you’re going to do it with verniers and calipers, you’re committed until the next job comes around.”

In Dunn’s experience, that’s the best time to propose technological change: when new money comes in along with a new contract.

“Usually when your processes are locked in place and you’re used to a certain amount of revenue from your parts, it’s really, really hard to go to management and say, ‘we need you to eat into those profits so that I can have this piece of equipment.’ They’re going to say, ‘why? I’m making this money, and you’re telling me I’ll make less money to deliver the same product?’ That’s a really hard thing to sell,” he explained. When budgeting new money for different project needs, it’s easier for employees to propose new technology using the funds earmarked for the existing process. “Let’s say $40,000 is all it takes to get a standard probing package arm,” said Dunn. “You can buy $40,000 worth of gauges no problem. So at that point, you can approach management and say, ‘Hey, here’s a solution that will work for this project, the money is set aside for it, and we can then use the piece of equipment to improve other processes in the future.’”
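Dunn’s budgeting argument can be made concrete with a back-of-the-envelope payback calculation. The sketch below reuses figures from the article’s anecdotes (a $40,000 arm, a 3-hour manual inspection versus roughly 8 minutes with an arm), but the labor rate and monthly volume are illustrative assumptions, not numbers from the article.

```python
# Rough payback estimate for a portable CMM purchase.
arm_cost = 40_000           # standard probing package arm (from the article)
labor_rate = 60             # $/hour, assumed fully burdened labor rate
manual_hours = 3.0          # manual inspection time per assembly
arm_hours = 8 / 60          # arm inspection time per assembly
assemblies_per_month = 100  # assumed inspection volume

savings_per_assembly = (manual_hours - arm_hours) * labor_rate
monthly_savings = savings_per_assembly * assemblies_per_month
payback_months = arm_cost / monthly_savings
print(f"Payback in about {payback_months:.1f} months")
```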

Listen to employee experience

Lastly, in addition to suppliers and trade shows, your employees have knowledge of new technologies and alternative ways of doing processes that you can unlock. As experienced machinists, engineers and workers move around the industry, they carry knowledge of how the industry is moving forward, and they bring this knowledge to your company.

“Who is more likely to be aware of and plugged into these technologies? It’s gonna be your shop floor guys,” said Dunn. “A lot of employees arrive at your shop with experience with other equipment, and bring that knowledge and advice. That opens doors and minds to new alternatives. Management can’t spread themselves so thin as to become as intimately familiar with the process as a machinist or fabricator. So, good managers listen to their employees.”

Where will your next process improvement take you?

While technological inertia has sunk many ships across industries and sectors, the keys to navigating it come down to basic, effective management practices:

  • Stay up to date in your industry
  • Leverage the experience of your employees
  • Understand the ROI
  • Don’t fear change

Armed with these principles, you’ll be ready to ride the next technological wave and stay ahead of the competition.

The post How to fight technological inertia to make key improvements appeared first on Engineering.com.

]]>
Optimization of processes and automation systems https://www.engineering.com/optimization-of-processes-and-automation-systems/ Wed, 28 Aug 2024 19:32:18 +0000 https://www.engineering.com/?p=131340 Regardless of the industry or technology, rarely is any system optimized at startup.

The post Optimization of processes and automation systems appeared first on Engineering.com.

]]>

Experienced design engineers can certainly estimate cycle times, throughput, quality and uptime. However, the complexity of processes and associated controls leaves plenty of room for fine-tuning during engineering and after installation. It should be noted that optimization is not the same as continuous improvement. Optimization is refinement of a current process, whereas continuous improvement generally refers to changes to the process or systems. The two can be pursued concurrently, but it is best to optimize first and then concentrate on continuous improvement.

Simulation and modeling

Simulation after the initial design can be used quite effectively before the final design is complete. Simulation time and costs should be worked into a project whenever possible, as the payback can be quite significant. These tools can be used to test and validate designs. Identifying potential issues at the design stage reduces the risk of costly mistakes that could delay commissioning or cause problems afterwards. Simulation can be used to evaluate machining and automation processes and serves as a tool to facilitate conversations throughout the project.

Process Analysis and Mapping: Some simulations can be detailed enough to be considered a “Digital Twin” of the system. Digital Twins allow even more detailed simulations to take place. System behavior can be evaluated under very specific conditions and inputs, creating a map that enables continuous optimization and testing without disrupting actual operations. As well, a properly designed and regularly updated Digital Twin can then operate simultaneously with the actual system, providing a degree of future predictability.

Current State Analysis: Models encourage designers to more thoroughly understand existing processes. Creating an accurate model involves documenting every step, input, output, and resource used within the system. The goal is to have a clear and comprehensive overview of how the system operates. This in turn sets the foundation for identifying areas of improvement.

Once the process is mapped out and simulated, designers can identify points in the process where delays or inefficiencies occur. These could be due to machine limitations, inadequate supply of materials, or other factors that slow down the process. By visualizing the flow of materials and information through the system, designers can enhance and streamline processes and eliminate waste.
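As a toy illustration of current-state mapping, the sketch below models a line as a dictionary of step cycle times and identifies the bottleneck that bounds throughput. The step names and times are made up, and a real analysis would use discrete-event simulation software rather than this single-number model.

```python
# Minimal current-state map: each step's cycle time in seconds per part.
# The slowest step bounds the throughput of the whole line.
steps = {"load": 12, "machine": 45, "inspect": 30, "unload": 10}

bottleneck = max(steps, key=steps.get)        # step with the longest cycle
throughput_per_hour = 3600 / steps[bottleneck]
print(f"Bottleneck: {bottleneck}, max throughput: {throughput_per_hour:.1f} parts/hour")
```

Reducing any non-bottleneck step in this model changes nothing; only attacking the bottleneck raises throughput, which is why mapping comes before optimizing.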

Data monitoring and collection

An often-neglected step in the improvement process is proper and accurate data monitoring and collection. A logical and systematic approach is required, with emphasis on using the proper tools for the job. If good data is not collected, the subsequent analysis will be flawed. It is extremely important to understand what data is needed and how accurate that data must be. A camera system, for example, intended for image collection or shape recognition may not be able to measure dimensional attributes for quality purposes.

Sensor Integration: Integrating sensors into machinery and processes allows for real-time data collection of various parameters such as temperature, pressure, speed, and more. This data is crucial for determining system performance and possibly identifying areas for improvement. Forward planning will help reduce costs by designing in connections and associated hardware during equipment build.

Analysis: Once the data is collected, advanced analytics can be used to help identify patterns, trends, and anomalies. Deviations from expected performance metrics may indicate issues that need addressing. Analysis can be performed on-site or even remotely by a third party that specializes in big data collection and analysis. Modern AI learning algorithms can now proactively predict potential equipment failures before they occur, minimizing downtime and even extending the lifespan of machinery. Monitoring robot joint motor performance, for example, can be used to trigger a preventative maintenance activity before a problem leads to a significant breakdown.
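A minimal sketch of the kind of anomaly check described above: it flags readings that fall far outside the statistics of the preceding window (a simple z-score test). The temperature trace, window size, and threshold are illustrative assumptions; production systems would use more robust statistical or learned models.

```python
import statistics

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        std = statistics.stdev(recent)
        if std and abs(readings[i] - mean) / std > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical motor-temperature trace with one spike at index 6
temps = [70.1, 70.3, 69.9, 70.2, 70.0, 70.1, 84.5, 70.2]
print(flag_anomalies(temps))
```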

PLC systems

Coding: Writing code is not necessarily difficult; however, writing efficient, modular, and well-documented PLC code takes time, planning and experience. Good coding techniques are crucial for making the code easier to maintain and modify, reducing the likelihood of errors and enhancing system reliability.

Error Handling and Debugging: Robust error handling routines are essential so that operators and maintenance personnel can quickly identify and resolve issues. This must be specified early, as a great deal of time and effort is required. The payback is reduced downtime and smooth system operation.

Human-Machine Interface (HMI): Designing intuitive and user-friendly interfaces makes it easier for operators to control and monitor systems. This can reduce the likelihood of operator errors and improve overall system efficiency. Providing real-time feedback and alerts to operators allows for quick responses to issues. This can include notifications about performance deviations, maintenance needs, or system faults.

Automation and robotics optimization

Path Optimization: It is important that path creation is done by experts. However, in many robotic systems, there will still be room for improvement. Optimizing the movement paths can significantly reduce cycle times and energy consumption. This involves programming robots to take the most efficient routes. Using joint moves can be faster than calculated linear or curved routes. However, adding intermediate points can sometimes make a robot move less erratically.

Cycle Time Reduction: Streamlining operations to reduce the time taken for each cycle of operation increases overall throughput. This can involve optimizing tool changes, reducing setup times, and eliminating redundant steps. The goal is to minimize unnecessary movements and dwell times to reduce non-value-added motion.
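One simple way to picture path optimization is a greedy nearest-neighbor reordering of target points, sketched below. The 2D points are hypothetical, and the heuristic is not optimal (real robot path planners also account for joint limits, obstacles and tool orientation), but it shows how reordering alone can shorten travel and therefore cycle time.

```python
import math

def nearest_neighbor_order(points, start=(0.0, 0.0)):
    """Greedy reordering: always visit the closest remaining point next.
    A simple heuristic, often shorter than the raw order but not optimal."""
    remaining = list(points)
    order, current = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    return order

def path_length(points, start=(0.0, 0.0)):
    """Total travel distance visiting points in the given order."""
    total, current = 0.0, start
    for p in points:
        total += math.dist(current, p)
        current = p
    return total

# Hypothetical 2D target points in their original programmed order
pts = [(5, 5), (1, 0), (5, 0), (1, 5)]
print(path_length(pts), path_length(nearest_neighbor_order(pts)))
```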

Continuous improvement and lean methodologies

Many companies follow specific techniques to refine processes. Regardless of the methodology employed, most can be used for both optimization and continuous improvement. It should be noted that these are only tools for creating positive change; expertise in the chosen method is necessary. A culture of improvement is a tremendous benefit and should not be discounted. Some examples are:

Kaizen: Implementing a culture of continuous improvement, known as Kaizen, encourages regular evaluation and enhancement of processes. This approach focuses on making small, incremental changes that collectively lead to significant improvements. The process is allowed to stabilize before moving on to the next development opportunity. Since this approach represents a culture, it does not matter if the target is quality, maintenance, cycle time, operation or some other enhancement.

Six Sigma: Utilizing Six Sigma methodologies helps reduce process variation and eliminate defects. This is a data-driven process that uses a statistical approach to decision-making to improve process quality and efficiency. Although this method mainly targets process improvements that affect quality, a thorough analysis of data can lead to beneficial discoveries in many areas.

Through advance planning and the careful implementation of some of these strategies, organizations can achieve significant improvements in the performance, efficiency, and reliability of their automation systems and associated processes. A holistic approach, involving multiple tools and a variety of personnel, can enhance productivity and minimize waste.

The post Optimization of processes and automation systems appeared first on Engineering.com.

]]>
The 5 layers of digital transformation https://www.engineering.com/the-5-layers-of-digital-transformation/ Fri, 19 Jul 2024 16:58:33 +0000 https://www.engineering.com/?p=52440 How to think about digital integration and transformation within a company or process.

The post The 5 layers of digital transformation appeared first on Engineering.com.

]]>

Embarking on digital transformation for an aerospace manufacturing company signifies a strategic shift towards integrating advanced digital technologies across all facets of operations.

This includes using technologies such as Industrial Internet of Things (IIoT) for real-time monitoring of equipment and systems, implementing artificial intelligence (AI) and machine learning algorithms for predictive maintenance and optimized production scheduling and adopting digital twins to simulate and optimize the performance of aircraft components and systems.

The digitalization pyramid

The digitalization pyramid is a conceptual framework used in industrial and organizational contexts to illustrate the levels of digital integration and transformation within a company or process.

It consists of several layers or stages, each representing different aspects of digitalization. While variations exist, a common representation includes the following layers:

Data collection: The base layer of the pyramid involves the collection of raw data from various sources within the organization or across the value chain. This data can come from sensors, machines, devices, databases or virtually any system that collects data.

Data integration: The next layer is about integrating and consolidating the collected data into a unified format or system. This stage ensures that data from different sources can be accessed, processed and analyzed.

Data analysis: You guessed it. This layer is about analyzing the integrated data to derive insights, trends, patterns and actionable information. Techniques such as statistical analysis, machine learning and artificial intelligence are a natural fit here.

Digitalization: This layer involves the transformation of business processes and operations using digital technologies and insights gained from data analysis. It includes automation, optimization and the use of digital tools to streamline workflows and improve efficiency.

Digital transformation: This last phase is the goal of the entire exercise and represents the strategic adoption of digital technologies to fundamentally change how a business operates, delivers value to customers and competes in the market. It may involve new business models, innovative products or services and a shift towards a more data-driven and agile organization.
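To make the lower layers of the pyramid concrete, the toy sketch below collects readings from two made-up sources, integrates them into one unified record format, and derives a simple per-machine insight of the kind that feeds the digitalization and transformation layers above. All source names and values are hypothetical.

```python
# Layer 1, data collection: raw readings from two separate sources
raw_plc = [("M1", 71.2), ("M2", 68.4)]                # e.g. PLC tags
raw_csv = [("M1", 73.0), ("M2", 69.1), ("M1", 74.8)]  # e.g. a CSV export

# Layer 2, data integration: consolidate both sources into one format
integrated = [{"machine": m, "temp": t} for m, t in raw_plc + raw_csv]

# Layer 3, data analysis: derive a per-machine average temperature
machines = {r["machine"] for r in integrated}
averages = {
    m: round(
        sum(r["temp"] for r in integrated if r["machine"] == m)
        / sum(1 for r in integrated if r["machine"] == m),
        2,
    )
    for m in machines
}
print(averages)
```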

This is a basic roadmap for organizations looking to evolve and harness the power of digital technologies, but nothing about this process is basic. Each one of these phases is made up of many complicated initiatives and no company can do this properly without good partners in the process.

What’s the difference between digitization and digitalization?

The terms “digitization” and “digitalization” are related but have distinct meanings in the context of technology and business transformation:

Digitization refers to the process of converting information or data from analog to digital form. It involves transforming physical or analog artifacts (such as documents, images, videos or processes) into digital formats that can be processed, stored and transmitted electronically. Examples include scanning paper documents to create digital copies, converting analog audio or video recordings into digital formats or creating digital records of interactions between machines.

Digitalization is the broader process of integrating digital technologies into various aspects of business operations, processes and strategies to fundamentally change how they operate and deliver value to customers. It relies on digital technologies (like AI, IoT, cloud computing, data analytics) to improve efficiency, create new business models, enhance customer experiences and innovate within an organization. Some examples would be implementing IoT sensors to gather real-time data for predictive maintenance, using AI algorithms to automate decision-making processes, adopting cloud-based solutions for scalable operations or redesigning customer interactions through digital channels.

The post The 5 layers of digital transformation appeared first on Engineering.com.

]]>