Wayne Schaefer, Author at Engineering.com (https://www.engineering.com/author/wayne-schaefer/)

Cracking the Code—Overcoming Complexity When Troubleshooting Modern Control Systems
Published Wed, 29 Nov 2023 | https://www.engineering.com/cracking-the-code-overcoming-complexity-when-troubleshooting-modern-control-systems/
New does not necessarily mean better.

New technology for industrial automation and control systems is often sold with the promise of increased efficiency, advanced functionality and superior performance. This may be true in some instances; however, it is not necessarily the case when it comes to troubleshooting and maintenance. We sometimes lose sight of the fact that newer is not always synonymous with better, and older technologies work just fine in modern factories. While the adoption of cutting-edge technologies can lead to innovation and increases in productivity, it can also present unique challenges that unnecessarily complicate the troubleshooting process.

Creating new levels of complexity

The introduction of new technology in control systems can involve complex software, intricate hardware configurations and sophisticated network communications. This complexity can lead to scenarios where diagnosing and fixing problems becomes more challenging, requiring a higher level of expertise and understanding. For technicians and engineers accustomed to more traditional systems, the learning curve can be steep, potentially leading to longer downtimes during problem resolution.

New technologies can also come with dependencies on specific vendors or proprietary solutions, limiting the availability of tools and information necessary for effective troubleshooting. On top of this, the push for interconnected, smart factory environments further complicates matters, introducing variables like cybersecurity threats and compatibility issues with legacy systems. Troubleshooting these systems can be demanding due to their complexity and the critical nature of the processes they interface with. The ability to quickly identify and resolve issues is now even more vital to minimizing downtime and maintaining productivity.

The Role of PLC in Industrial Automation

PLC systems are the backbone of modern industrial automation. They control and monitor a vast array of processes, ensuring precision, reliability and efficiency. In the modern factory, the role of the PLC extends from controlling basic machine operations to managing the intricate details of manufacturing processes. PLCs are more than just control devices; they are the cornerstone of smart factories and the Industrial Internet of Things (IIoT), serving as a bridge between physical operations and data collection and analysis. This complexity makes it more difficult to create a PLC program that is easy to troubleshoot. Design teams need to understand the role troubleshooting plays in the efficient maintenance and operation of industrial environments. Specific strategies to design programs that facilitate ease of troubleshooting for the maintenance team go a long way toward ensuring maximum machine uptime and throughput.

Challenges in Troubleshooting

Troubleshooting control systems is a critical skill, given their central role in manufacturing and assembly industries. Effective system repair can be hampered by several key factors, such as:

·         Complexity of Systems: Modern industrial processes are highly complex, making it difficult to easily pinpoint issues. Constant technological advancements mean that technicians must continually update their knowledge.

·         Interconnected Systems: Control systems are often part of a larger network, where faults can have cascading effects. Root cause analysis can be daunting, requiring time-consuming investigation and analysis.

Best Practices in Program Design

These challenges can be overcome, or certainly minimized, if effort is put into creating a program using good programming practices. This effort must begin at the design stage and continue throughout the duration of the project. Some suggestions fall into the following categories:

·         Clarity and Organization

·         Modular Design

·         Simulation and Testing

·         Diagnostic and Monitoring Tools

·         Training and Skill Development

Clarity and Organization

·         Structured Programming:  Implement a structured approach, dividing the program into manageable sections or modules each handling a specific part of the process. This approach simplifies understanding the program’s flow, making it easier to locate and address issues.

·         Naming Conventions: Adopt clear and consistent naming conventions for variables, routines, and functions. This aids in understanding the program’s flow and function. For instance, a variable controlling a conveyor belt might be named `ConveyorBeltSpeed` rather than a vague `Speed1` or `Var_a`.

·         Commenting and Documentation: Comprehensive commenting within the code and thorough documentation, such as change control logs, are essential. Comments should explain the purpose of code sections, the reasoning behind complex functions, and any peculiarities in the code. A good repair technician can read the code, so avoid comments that merely restate the exact logic of the program; instead, tell the “story” of what the program is trying to accomplish. Documentation should also provide an overview of the program structure and flow.
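
These naming and commenting practices can be illustrated with a short, language-neutral sketch. The example is written in Python rather than ladder logic or Structured Text, and every tag and setpoint is a hypothetical stand-in:

```python
# Illustrative sketch only, not vendor PLC code. Routine and tag names
# describe purpose (ConveyorBeltSpeed, not Var_a); comments tell the
# "story" of the process rather than restating the logic.

def check_safety_interlocks(guard_closed: bool, estop_ok: bool) -> bool:
    """Story: the machine may only run while all guards are closed
    and no emergency stop is latched."""
    return guard_closed and estop_ok

def update_conveyor(run_request: bool, safety_ok: bool) -> dict:
    """Story: the conveyor runs only when requested AND safe; when it
    stops, the speed setpoint drops to zero."""
    conveyor_run = run_request and safety_ok
    conveyor_belt_speed = 1.2 if conveyor_run else 0.0  # m/s (assumed setpoint)
    return {"ConveyorRun": conveyor_run,
            "ConveyorBeltSpeed": conveyor_belt_speed}
```

A technician reading `update_conveyor(True, check_safety_interlocks(True, False))` can see immediately why the belt is stopped, which is exactly the point of descriptive naming.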

Modular Design

·         System Design: As part of Structured Programming, design the system in a modular fashion, allowing for isolation and testing of individual modules. This approach is particularly effective in large systems where isolating problems can be challenging.

·         Reusability: Utilize function blocks for common or repetitive tasks. This not only saves time but also makes the system more uniform, allowing for easier diagnostics. One function can also serve different but similar tasks. Of course, there are limits on how modular you should attempt to be. Avoid falling into the trap of creating over-parametrized functions in the drive for flexibility.
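
The reusable function block idea can be sketched as follows. This is an illustrative Python analog of an IEC-style function block, not vendor code; the logic itself is the classic motor seal-in (start/stop latch) circuit, parametrized just enough to be shared by every motor on the line without over-parametrizing:

```python
# Sketch: one standard "function block" reused for every motor,
# so diagnostics look the same everywhere. Names are hypothetical.

class MotorBlock:
    """Classic seal-in logic: (start OR already running) AND NOT stop
    AND overload healthy."""
    def __init__(self, name: str):
        self.name = name
        self.running = False

    def scan(self, start_pb: bool, stop_pb: bool, overload_ok: bool) -> bool:
        # The output seals itself in until stop is pressed or the
        # overload trips, mirroring the hardwired relay circuit.
        self.running = (start_pb or self.running) and not stop_pb and overload_ok
        return self.running
```

Instantiating `MotorBlock("Conveyor1")` and `MotorBlock("Pump3")` gives two independent motors running identical, pre-tested logic.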

Simulation and Testing

·         Simulating Processes: Before deployment, use simulation tools to test the logic and functionality of the control program. This step can reveal issues that might be missed in a static review.

·         Comprehensive Testing: Perform rigorous testing under various scenarios to ensure the program behaves as expected under different conditions. This includes testing for normal operation, abnormal operations, and failure modes.
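
A minimal sketch of this scenario-based testing, modeled in Python with hypothetical setpoints; the same table-driven approach works whether the logic runs in a simulator or on the bench:

```python
# Sketch: exercise control logic across normal, abnormal and failure
# scenarios before deployment. Thresholds are illustrative assumptions.

def heater_command(temp_c: float, sensor_ok: bool) -> str:
    if not sensor_ok:
        return "FAULT"      # failure mode: fail safe, heater off
    if temp_c < 58.0:
        return "ON"
    if temp_c > 62.0:
        return "OFF"
    return "HOLD"           # deadband keeps the output from chattering

scenarios = [
    (55.0, True,  "ON"),    # normal: below setpoint
    (65.0, True,  "OFF"),   # normal: above setpoint
    (60.0, True,  "HOLD"),  # normal: inside the deadband
    (60.0, False, "FAULT"), # failure: broken sensor must fail safe
]
results = [heater_command(t, ok) == expected for t, ok, expected in scenarios]
```

Running every scenario on each program revision catches regressions that a static code review would miss.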

Diagnostic and Monitoring Tools

·         Built-In Diagnostics: Leverage PLCs’ built-in diagnostic functions and capabilities. These can provide real-time feedback on the status of inputs/outputs, module health, and network status.

·         External Monitoring Tools: Use external software tools for monitoring and logging system operations. These tools can log operational data, providing a historical record that can be invaluable for diagnosing intermittent or complex issues. One underutilized tool is the contact histogram, which maps the history of bit changes, making root cause analysis easier.

·         Error Handling Routines: Implement robust error handling routines that can detect, log, and, where possible, rectify errors autonomously.  A well-designed diagnostic system can provide extremely useful real-time data.

·         Alert Systems: Develop alert systems that notify technicians of issues promptly, specifying the nature and location of the problem. For example, a routine might detect a motor overload and shut down the system to prevent damage but, at the same time, electronically notify the correct departments of the failure.
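
Two of these ideas, the contact histogram and an error handler with alerting, can be sketched in a simple model. This is illustrative Python rather than vendor PLC code; the trip threshold, department names and message format are assumptions for the example:

```python
# Sketch: monitoring and error-handling helpers. Not vendor code.

def contact_histogram(samples):
    """Record (scan_index, new_state) for every change of a monitored
    bit; this history makes intermittent faults far easier to trace."""
    transitions, last = [], None
    for i, state in enumerate(samples):
        if state != last:
            transitions.append((i, state))
            last = state
    return transitions

def handle_overload(current_amps, trip_amps, event_log, alerts):
    """Detect a motor overload, log it, drop the run permissive and
    queue an electronic notification to the affected departments."""
    if current_amps > trip_amps:
        event_log.append(("MOTOR_OVERLOAD", current_amps))
        alerts.append({"to": ["maintenance", "production"],
                       "msg": f"Overload: {current_amps:.1f} A exceeds "
                              f"{trip_amps:.1f} A trip setting"})
        return False  # run permissive dropped: the system shuts down
    return True       # normal operation continues
```

A flaky sensor shows up immediately in the histogram: `contact_histogram([0, 0, 1, 0, 1, 1])` yields four entries where a clean, single pickup would produce two.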

Training and Skill Development

·         Best Practices in Industry: Examine real-world case studies where efficient PLC program design aided in quick troubleshooting and how different environments influence PLC programming strategies. These might include examples from manufacturing, process industries, or building automation.

·         Technician Training: Emphasize the importance of training technicians not just in troubleshooting but also in understanding the logic and structure of the PLC programs. Continuous learning and staying updated with the latest PLC technologies and programming practices should be part of this strategy. Technicians with a deep understanding of the program structure and logic are better equipped to find and fix issues.

The Future of PLC Programming

The future of PLC programming is likely to be shaped by advances in AI and machine learning, which could offer even more sophisticated diagnostic and troubleshooting capabilities. The ability to collect and analyze large amounts of data can not only solve current problems but also predict and prevent future issues.

Human Machine Interface Engineering
Published Mon, 23 Oct 2023 | https://www.engineering.com/human-machine-interface-engineering/
Incorporating HMI and user interface design in user-friendly and maintainable PLC control systems.

An example of a custom-developed HMI screen. (Image: Wayne Schaefer)

The base design of the modern Programmable Logic Controller (PLC) in automation has been around since at least 1968, when General Motors wrote a specification for a ‘Standard Machine Controller’. This led to innovation and the eventual creation of the modular digital controller, better known as the Modicon 084. The incorporation of ‘ladder logic’ was significant, as it quickly became the accepted standard industrial machine programming language. Regardless of other languages being adopted over the years, ladder logic has always been a favorite of controls engineers and technicians. Although the automation industry has seen many technological advances over the past few decades, the PLC continues to be the backbone of these systems.

When PLCs were first introduced, the interface between the operator and the machine primarily consisted of lights for status indication and pushbuttons or switches to convey commands from the operator to the machine. By the 1990s, the first Human-Machine Interfaces (HMIs) started to garner some interest. The eventual inclusion of the HMI brought the importance of an intuitive user interface to the forefront of effective machine operation and maintenance. Unfortunately, good HMI design has largely depended on the personal preference of the programmer. A first draft of an HMI specification was not released until 2015, with ISA’s first human-machine interface standard.

Symbiotic Relationship: PLCs and HMIs

A PLC is the brain of an automation system. The PLC program is tailored to make real-time decisions based on inputs and then execute the corresponding actions. Switches and various other sensors allow the PLC to determine machine status and monitor process variables. Outputs from the PLC allow actions to be taken based on a set of rules corresponding to the specific sequence. Due to the physical nature of processes, the monitoring and control of machine states and actions can be defined and predictable. An HMI, on the other hand, is the part of the system that must communicate with the human element and is therefore less well defined. The HMI offers a visual representation of a process and allows human operators to interact with it, oversee it and intervene when necessary.
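
The read-evaluate-write behavior described above is the classic PLC scan cycle. The following is a conceptual Python model, not controller firmware, with I/O names invented for the example:

```python
# Sketch of one PLC scan: read inputs, execute logic, write outputs.
# A real controller repeats this cycle every few milliseconds.

def scan_once(inputs: dict) -> dict:
    # 1. The input image is frozen at the start of the scan
    part_present = inputs["part_present"]
    clamp_closed = inputs["clamp_closed"]
    # 2. The logic executes against that frozen image
    advance_drill = part_present and clamp_closed
    # 3. The output image is written at the end of the scan
    return {"advance_drill": advance_drill}
```

Because every decision is made against a frozen input image, machine behavior is deterministic scan to scan, which is what makes PLC control predictable in a way the human-facing HMI can never fully be.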

Modular and Hierarchical Design for HMIs

From a technical perspective, an HMI application’s effectiveness lies in its ability to present the vast amounts of available data in a meaningful manner. By adopting a modular and hierarchical design, developers segregate processed data into “chunks,” allowing users to start from a general overview of the system and then drill down to reveal detailed process information and the current machine state. However, just creating a user interface with a logical arrangement of the necessary inputs and outputs may not be the best solution. Designs, although technically correct, could overburden the operator with a cluttered display, incorrect use or overuse of graphics and colors, or non-intuitive placement of buttons and indicators.

Human Elements: Operators do not Make Decisions Based on Logic

One mistake designers make when creating interfaces is failing to integrate insights on emotional decision-making into HMI design. Studies have demonstrated that even the most logical decisions can end up being emotional. Creating an intuitive, user-centric system that aligns with natural human processes can have a positive influence on training, operations, usability and operator retention. For example, by understanding that humans naturally relate to certain shapes, colors and movements, HMI developers can make correct use of emotionally expressive interface elements.

Rounded shapes, for instance, are often perceived as more friendly and safe, while sharp, angular shapes might indicate caution or danger. Colors also have strong relationships with emotional triggers. Color-coding and logical layout design play pivotal roles in HMI effectiveness. By assigning specific colors to certain states (e.g., red for emergencies), developers can ensure instant operator recognition. Moreover, a clutter-free, logically segmented layout ensures operators can find and act on information quickly.
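
One way to enforce consistent color-coding is a single state-to-color table shared by every screen, so red always means the same thing everywhere. A sketch with an assumed palette (the hex values and state names are illustrative, not taken from any standard):

```python
# Sketch: one shared state-to-color table for the whole HMI project.
# Palette values are assumptions for the example.

STATE_COLORS = {
    "emergency": "#D0021B",  # red: immediate operator action required
    "warning":   "#F5A623",  # amber: attention needed soon
    "running":   "#7ED321",  # green: normal operation
    "idle":      "#9B9B9B",  # grey: no action required
}

def color_for(state: str) -> str:
    # An unknown state falls back to the attention color rather than
    # silently rendering in a "normal" color and hiding a problem.
    return STATE_COLORS.get(state, STATE_COLORS["warning"])
```

Centralizing the table means a palette change or colorblind-friendly variant is a one-line edit instead of a screen-by-screen hunt.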

The Language of Symbols: Adopting Universally Recognized Icons

Consistency in symbology across the HMI can drastically reduce the learning curve for operators. By using universally recognized icons and symbols, operators can intuitively understand processes, reducing training time and potential errors.

Solution: Interactive Diagnostic Tools and Feedback

Feedback is central to any control system. While PLCs manage feedback at a process level, HMIs do so at a human level. By visually confirming user commands and displaying the real-time system states, HMIs can reinforce operator confidence and ensure command continuity. Remember that the HMI does not have to be just a passive display tool. Modern HMIs come equipped with interactive diagnostic utilities. For example, when an alarm is triggered in the PLC, the HMI can guide the operator through a troubleshooting process, even suggesting potential remedies. This can be done within a PLC subroutine or even by running scripts on the HMI itself.
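
Guided troubleshooting can be as simple as a lookup from alarm code to a checklist the HMI steps the operator through. A sketch with hypothetical alarm codes and steps:

```python
# Sketch: alarm code -> guided troubleshooting steps for the HMI.
# Codes, steps and thresholds are illustrative assumptions.

TROUBLESHOOTING_GUIDES = {
    "ALM_101": ["Check the part-presence sensor at station 1",
                "Verify clamp air pressure is above 60 psi",
                "If the alarm persists, call maintenance"],
    "ALM_205": ["Reset the conveyor overload at the MCC panel",
                "Inspect the belt for jams before restarting"],
}

def guide_for(alarm_code: str) -> list:
    """Return the step list for an alarm, with a safe fallback so the
    operator is never left staring at a bare code."""
    return TROUBLESHOOTING_GUIDES.get(
        alarm_code, ["No guide available; contact maintenance"])
```

When the PLC raises `ALM_101`, the HMI can display the steps one at a time and require acknowledgment before advancing, which is the interactive behavior described above.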

Solution: Customizable User Profiles on HMI

Not all operators require access to all system functionalities. Modern HMIs allow for customizable user profiles depending on who is logged in, ensuring that operators only access the features relevant to their roles. This not only enhances security but also declutters the UI. This type of design reduces the possibility of an operator either getting confused after discovering a screen designed for maintenance personnel or accessing maintenance functions they have not been trained to use.
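
Role-based screen access reduces to a mapping from role to permitted screens. A sketch with assumed role and screen names:

```python
# Sketch: role -> visible screens. Role and screen names are
# illustrative assumptions, not from any particular HMI package.

ROLE_SCREENS = {
    "operator":    {"overview", "production", "alarms"},
    "maintenance": {"overview", "production", "alarms",
                    "io_forcing", "setup"},
    "engineer":    {"overview", "production", "alarms",
                    "io_forcing", "setup", "recipes"},
}

def can_view(role: str, screen: str) -> bool:
    # Unknown roles see nothing: deny by default.
    return screen in ROLE_SCREENS.get(role, set())
```

The deny-by-default fallback matters: a typo in a role name should hide screens, never expose them.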

Solution: Integrating Voice Commands and Augmented Reality

The real future of HMI lies in integrating cutting-edge Artificial Intelligence (AI) technologies. Voice commands allow hands-free operation, while Augmented Reality (AR) overlays crucial data on real-world machinery, offering an immersive troubleshooting experience. Leveraging the AI component can allow the user to have a conversation with the interface, no different than asking a human “what’s wrong with the machine?” AI can also direct the operator to the next step in the sequence or even detect if the operator tries to invoke a command out of the normal order. This type of design is already being tested for in-cabin automobile systems where the driver no longer has to look at or touch the interface.

User Acceptance Testing: Simulating Real-World Scenarios

User feedback is the cornerstone for iterative HMI development. By regularly gathering end user feedback, developers can continually refine the interface, ensuring it aligns with evolving needs. During development, HMI operation can be tested and refined by using simulated industrial environments. This pre-emptive measure can help in identifying potential UI issues or data representation inaccuracies. In post development, HMIs should be rigorously field tested with each of the stakeholders.

Documentation and Annotations

While PLC programs have comments embedded in code, HMIs offer hover tooltips, help sections, and interactive tutorials. Such features make the onboarding process smoother and serve as quick references during operations.

Seamless System Integration 

The union of PLCs and HMIs is one of the pinnacles of modern industrial automation. While PLCs drive decision-making processes, HMIs make these processes transparent, interactive, and intuitive for human operators. By holistically designing PLC programs with their HMI counterparts in mind, developers can ensure seamless system integration, ease of diagnosis, and optimal maintainability. As technology continues to evolve, the boundaries between human operators and machines will blur, with HMIs acting as the bridge to this symbiotic future.

The post Human Machine Interface Engineering appeared first on Engineering.com.

]]>
Mitigating Cyber Attacks on Manufacturing Automation Assets
Published Tue, 29 Aug 2023 | https://www.engineering.com/mitigating-cyber-attacks-on-manufacturing-automation-assets/
PLCs, HMIs and other systems have become easy targets for motivated cybercriminals. Here are some tips on how to prepare for them and a look at some PLCs with built-in cybersecurity.

(Image: Siemens)

Historically, manufacturing systems have been protected either by being isolated from the outside world or by connection to IT-managed networks already protected by firewalls and antivirus software. However, as the manufacturing sector undergoes a digital evolution, the production floor has become more susceptible to attack. Luckily, over the past decade or so, PLC manufacturers have made strides in efforts to protect PLCs, HMIs and other systems from hackers and malicious software. Given the forecast by cybercrime research firm Cybersecurity Ventures that cybercrime damages will reach $10.5 trillion per year by 2025, protecting industry has become an absolute necessity.

Understanding the Cyber Threat Landscape in Manufacturing

Modern manufacturing systems are sophisticated blends of PLC control systems, information technology (IT) and operational technology (OT). Each component plays a critical role in maintaining efficiency and productivity, and attacks on any one of these can be devastating. The trend of interconnecting legacy systems with modern technologies opens up an array of new potential points of entry for cybercriminals. Since these components often have inadequate security measures which can be exploited by cyber attackers, other layers of security must be employed.

In the face of these threats, cyber resilience has emerged as a central theme in cybersecurity strategy. Cyber resilience is the ability of an organization to withstand, recover from and adapt to cyber attacks. This resilience is a combination of many strategies that prevent or minimize the impact of a cyber breach, maintain critical operations and ensure swift recovery.

The Human Element in Cybersecurity

While technical measures form the backbone of any cybersecurity strategy, the human element is equally crucial. Social engineering attacks—where attackers trick people into providing confidential information—are a common form of cyberattack and are often remarkably successful. Providing suitable cybersecurity training helps employees understand the threats they face and how their actions can impact the company’s cybersecurity.

This involves making cybersecurity a key part of the company culture, with clear and consistent communication from leadership about its importance.

Strategies for Mitigating Cyber Attacks:

1.      Risk Assessment and Management: Regardless of the systems used, one of the first steps towards creating a cyber-resilient manufacturing environment is conducting a comprehensive risk assessment. Identify and assess potential risks that can be exploited, ranging from weak passwords and outdated software to unsecured network connections and physical access controls. These risks can be reduced by using a set of good networking practices such as implementing encryption, two-factor authentication, regular software updates and continuous monitoring.

2.      Employee Training: Regular, effective and up-to-date cybersecurity training for all employees is essential. Such training could cover topics from identifying and avoiding phishing attempts, practicing good password hygiene, to understanding the importance and processes of software updates.

3.      Layered Defense: Manufacturers should adopt a layered defense approach, also known as defense in depth. This approach involves the deployment of a series of defensive mechanisms such as firewalls, intrusion detection and prevention systems, encryption protocols, and regular system audits. Having multiple layers of security measures significantly decreases the likelihood of a successful breach.  Dividing the PLC and computer networks into subnetworks, or segments, improves network security and performance. If an attacker compromises one group of equipment, the breach won’t spread through the entire network.

4.      Legacy systems: Older equipment may need extra layers of security that could include limiting physical access to connection ports. Alternative methods of collecting the required information could also be explored such as using an intermediate data collection PLC as an information gateway. In a manufacturing environment, physical security and cybersecurity should not be seen as separate entities but should be seamlessly integrated. Surveillance systems could be used both as a deterrent and provide valuable information in the event of a breach.

5.      Incident Response Planning: Having a well-structured and rehearsed incident response plan can dramatically reduce the damage and recovery time in the event of a breach. Such a plan should include defined roles and responsibilities, communication protocols, steps for isolating affected systems and processes for system recovery and post-incident analysis. Part of this includes maintaining effective backups that can be reloaded in the event of a breach.

6.      Monitoring and Improvement: Cyber threats evolve continuously, which means static security measures are inadequate. A system for constant monitoring and updating of security measures should be in place. This should involve keeping up to date with the latest cybersecurity news, threat intelligence and technological advancements. PLC programs can be automatically audited for unauthorized modifications by comparing them to secure backups. Any anomaly can generate an alert, facilitating an immediate response.

7.      Supply Chain Security: To mitigate supply chain attacks, manufacturers need to extend their cybersecurity efforts to their suppliers. This includes conducting cybersecurity audits, collaborating on security best practices and drafting contractual requirements regarding cybersecurity measures.
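
The automated program audit described in point 6 can be approximated by fingerprinting the current program against a secure backup. A minimal sketch, assuming the program can be exported as bytes; the sample rung text is purely illustrative:

```python
# Sketch: detect unauthorized PLC program changes by comparing a
# cryptographic hash of the uploaded program to a trusted baseline.

import hashlib

def fingerprint(program_bytes: bytes) -> str:
    """SHA-256 digest of an exported program image."""
    return hashlib.sha256(program_bytes).hexdigest()

def audit(current: bytes, trusted_hash: str) -> bool:
    """True if the running program still matches the secure backup;
    False should raise an alert for immediate investigation."""
    return fingerprint(current) == trusted_hash

# Baseline captured when the program was last approved (illustrative)
baseline = fingerprint(b"RUNG1: XIC Start OTE Motor")
```

Scheduling this comparison after every program upload, or on a timed basis, turns a silent tampering attempt into an immediate, actionable alarm.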

Remember, the best security strategy usually involves a combination of tools and techniques, customized to the needs and risks of the specific manufacturing environment.

PLC Security

While the concept of built-in cybersecurity in PLCs is recent and is still evolving, some companies have started to integrate basic security features into their PLCs to address this issue. Here are a few examples:

1.      Siemens S7-1500: Cybersecurity features in the S7-1500 PLCs include access protection, where programming devices and HMI panels require user-specific authorizations to connect, and communication integrity, where data is protected from manipulation during transmission using encryption and message authentication codes. Even PLC-to-PLC and PLC-to-HMI communication requires that devices authenticate to each other, closing an otherwise wide-open access path.

2.      Rockwell Automation ControlLogix 5580: These controllers include a suite of security features, such as role-based access control, digitally-signed and encrypted firmware, change detection, logs, and auditing security features, as well as IP and MAC address protection.

3.      Schneider Electric Modicon M580: Features include integrated cybersecurity, Ethernet encryption and Achilles Level 2 certification, an industry-recognized cybersecurity certification that indicates a high level of protection against known cyber threats.

4.      Honeywell ControlEdge PLC: Secure boot prevents unauthorized firmware uploads, a secure default state enhances security right out of the box, and robust user controls manage access.

5.      ABB AC500-S: Cybersecurity features include user management, role-based access control and a firewall. It is designed to be compliant with IEC 62443, an international cybersecurity standard for industrial automation and control systems.

Please note that while these PLCs have built-in cybersecurity features, they are not immune to all cyber threats. Comprehensive network-level security measures, following best practices, and regular updates and patches are essential to maintaining a secure environment. It’s also important to work closely with PLC vendors and cybersecurity experts to fully understand the features, limitations, and best use cases of each PLC.

The Rising Importance of PLC Cybersecurity: An Essential Look into Industrial Vulnerability
Published Thu, 20 Jul 2023 | https://www.engineering.com/the-rising-importance-of-plc-cybersecurity-an-essential-look-into-industrial-vulnerability/
PLC cybersecurity isn’t just an essential precaution; it has become a vital element in the manufacturing landscape.


Inside the Control Rods Operation Room of a nuclear power plant. (Image: Zelle-power, CC BY-SA 4.0)

The digital transformation of goods and services has continued to reshape various aspects of our lives. Industries like manufacturing and automation are no exception, with hardwired relay logic control systems upgraded to electronic Programmable Logic Controllers (PLCs) as the primary control systems. Each advancement has provided advantages in manipulating multiple automated processes to enhance speed, efficiency and, more recently, the communication of large amounts of data for analysis. As industries venture further into the age of Industry 4.0, these computer systems have inadvertently exposed industry to a higher risk of cyber threats due to increased network connectivity. This evolution has created the need to provide not just computer and server cybersecurity but PLC cybersecurity as well. These changes are becoming urgent and must be embraced to maintain industrial operational continuity and ward off disastrous disruptions.

The drive towards enhanced interconnectedness in today’s manufacturing facilities means that PLCs, HMIs and SCADA systems that once functioned independently or on isolated plant-floor networks have become part of larger, interconnected Industrial Control Systems (ICS). This integration improves operational efficiency and reduces costs, but it also introduces a new set of vulnerabilities that can be exploited by cybercriminals. This expanded industrial digital landscape highlights the increasing importance of PLC cybersecurity.

PLC Systems: New Attack Vectors

PLCs were never designed with security in mind. Anyone with the skills and equipment could upload, download, delete or modify programs. Security was assumed through the physical isolation of the controllers, which are typically mounted inside industrial control panels near the machines they control. Even as PLCs became interconnected, security was managed by ensuring the manufacturing network was separated (air-gapped) or firewalled from the outside world.

Now, as we transition towards a data-centered world with highly networked industrial environments, modern PLCs have become a potential target for cyber threats. The shift towards a connected operational model has changed attack vectors, giving cybercriminals new avenues for disrupting, damaging or manipulating PLC operations across all industries and platforms. The push for Industry 4.0 must take this dramatically changing landscape into account.

Cybersecurity threats targeting PLCs have become increasingly sophisticated and impactful in the past decade. Here are some notable instances of successful cyber-attacks on PLCs:

1. Stuxnet (2010):

In perhaps the most well-known example, the Stuxnet worm targeted PLCs used in Iran’s nuclear facilities. It was designed to exploit specific vulnerabilities and manipulate Siemens’ PLCs responsible for controlling the speed of centrifuges used to enrich uranium. The worm caused the centrifuges to spin too fast, leading to physical damage while simultaneously providing false feedback to the operators. This incident had significant geopolitical implications and illustrated the severity of PLC-related cyber threats.

2. BlackEnergy (2015):

In 2015, a malware strain known as BlackEnergy was used in a cyber-attack on Ukraine’s power grid, causing a massive blackout. The attackers used spear-phishing emails to infiltrate the ICS and installed the BlackEnergy Trojan. The malware gained control over the Human-Machine Interface (HMI), which was communicating with PLCs. The PLCs were then manipulated to disrupt the power supply, leaving approximately 230,000 people without power for several hours.

3. Industroyer/CrashOverride (2016):

Industroyer, also known as CrashOverride, was used in a cyber-attack on Ukraine’s power grid in 2016, marking the second attack on Ukraine’s power infrastructure in two years. Industroyer was designed to target PLCs and protection relays used in electric substations. Unlike most malware that targets higher-level control systems, this malware was crafted to target the lower-level industrial protocols that PLCs use to communicate, showcasing an evolution in PLC attack vectors. Once it infected a system, the code would lie dormant until activated by a specific event or time.

4. TRITON/TRISIS (2017):

TRITON, also known as TRISIS, targeted Safety Instrumented Systems (SIS) and was unleashed on a petrochemical plant in Saudi Arabia in 2017. The SIS is a type of ICS that monitors the state of the process under control in order to bring it to a safe state in case of abnormal conditions. TRITON manipulated the instructions in the SIS in an attempt to cause physical damage to the plant and potentially harm the plant operators.

These examples demonstrate that PLCs can be attractive targets for hackers intending to cause physical damage, disrupt essential services or make geopolitical statements. They highlight the need for comprehensive and effective PLC security measures to protect industrial control systems. It should be noted that the Stuxnet virus was a wake-up call, resulting in a panicked attempt to lock down and protect many manufacturing facilities, especially those where security depended on PLC or network isolation (air gaps) as part of the protection model. Also, although BlackEnergy was used in 2015, the malware was first reported as far back as 2007.

Detecting these viruses has proven to be very difficult. In the case of Stuxnet, it took months to find and unravel even the basics of its code, while Triton was discovered due to a bug in its operation. Stuxnet had already done a lot of damage by the time it was discovered while Triton managed to expose itself before any real harm was done.

Financial and Safety Implications

The financial implications of a successful cyberattack on a PLC system can be catastrophic. Unplanned downtime due to a cyber incident could lead to significant production losses, which directly impact a company’s bottom line. Moreover, the cost of remediation, system hardening and potential regulatory fines could also be astronomical. According to a survey of 900 companies from Trend Micro Inc, the average cost of an ICS breach in 2022 was approximately $2.8 million with 89 percent reporting some sort of attack over the past 12 months. Other reports suggest most companies are ill-equipped to prevent (or even detect) an attack, even though implementing robust PLC cybersecurity measures is an essential investment to mitigate these potential risks.

Additionally, safety is paramount in industrial environments and a compromised PLC could pose severe threats to life and limb. For example, manipulating a PLC that controls chemical mixtures could result in harmful spills or even explosions. BBC News reports suggest a cyber attack on a steel mill in Iran, operated by a Siemens process control system, resulted in severe damage to equipment but could have easily caused human injuries as well. Therefore, PLC cybersecurity is not just about preserving system integrity—it’s a critical component for ensuring the safety of workers and the public.

Regulatory Requirements

Several regulatory bodies recognize the increasing risk to PLC systems and have enacted laws and regulations mandating certain cybersecurity measures. For example, the North American Electric Reliability Corp. (NERC) regulates and enforces specific standards for the cybersecurity of industrial control systems in the power sector. This set of standards is referred to as NERC Critical Infrastructure Protection (NERC CIP). Non-compliance can result in hefty daily penalties, making cybersecurity crucial from a regulatory perspective.

A Necessity, Not an Option

In an era where technology continues to rapidly transform industries, cybersecurity has become a pressing concern. Its growing importance cannot be overstated, considering the potential financial, safety, and regulatory implications of a breach. As PLCs become more integrated with other systems, their vulnerabilities will only increase, providing cybercriminals with more opportunities for exploitation.

Businesses must adopt robust cybersecurity strategies to protect themselves. These strategies could include system hardening, regular security audits, employee training, and the implementation of intrusion detection systems. Air-gapping and other isolation techniques are no longer sufficient protection.
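The false feedback Stuxnet fed to operators points at one concrete hardening measure from the strategies above: cross-checking HMI-reported values against an independent, out-of-band sensor. Here is a minimal sketch in Python; the function name, values and tolerance are hypothetical, and a real system would read these values over a fieldbus, OPC UA or a data historian rather than take them as arguments.

```python
# Minimal sketch of a plausibility check an intrusion-detection layer
# might run beside a PLC. All names, values and tolerances here are
# hypothetical illustrations, not a specific product's API.

def check_feedback(commanded_rpm: float, reported_rpm: float,
                   independent_rpm: float, tolerance: float = 0.05) -> list:
    """Flag disagreements between HMI-reported feedback and an
    independent out-of-band sensor."""
    alerts = []
    # HMI feedback should track the commanded setpoint.
    if abs(reported_rpm - commanded_rpm) > tolerance * commanded_rpm:
        alerts.append("reported value deviates from setpoint")
    # The out-of-band sensor should agree with the HMI-reported value.
    if abs(independent_rpm - reported_rpm) > tolerance * reported_rpm:
        alerts.append("independent sensor disagrees with HMI feedback")
    return alerts

# Spoofed feedback: the HMI reports ~1000 rpm while an independent
# vibration-derived estimate says the machine is running much faster.
print(check_feedback(1000.0, 1002.0, 1430.0))
```

In practice such checks belong in a monitoring layer separate from the PLC itself, so that a compromised controller cannot suppress its own alarms.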

As we stride towards a more data-driven future, PLC cybersecurity isn’t just an essential precaution; it’s a vital element in ensuring the resilience, integrity and longevity of our increasingly interconnected industrial systems.

The post The Rising Importance of PLC Cybersecurity: An Essential Look into Industrial Vulnerability appeared first on Engineering.com.

De-risking Automation Deployment: A Strategic Guide
https://www.engineering.com/de-risking-automation-deployment-a-strategic-guide/
Mon, 19 Jun 2023 10:34:00 +0000

The deployment of automation technology has its own unique set of potential failures and risks—here’s how to plan ahead.

The post De-risking Automation Deployment: A Strategic Guide appeared first on Engineering.com.

(Image: Kuka Group)

Automation is an integral part of the modern manufacturing environment and with it comes increased efficiency and productivity. However, as more industries embrace Industry 4.0 technologies, there is a substantial increase in risk from cost overruns, project delays, workflow disruption and security threats.

Understanding the Risks

Automation projects need to be assessed for risks at the design stage and reassessed during build and deployment. The key to creating a good risk-based decision-making process is to identify the potential risks involved—this risk identification can be divided into five categories. These categories are flexible and can be modified to suit a particular industry. Grouping risks can help define specification strategies, as one solution could affect more than one risk.

Operational Risks: System malfunctions or underperformance can disrupt throughput, affect production quality and lead to output delays. These risks are inherent in the deployment of any new technology, and automation is no exception.

Financial Risks:  Automation technology typically requires substantial upfront investment. Any failure or underperformance can lead to significant financial loss. This can also be coupled with technical risks if there are application errors or misapplied technology that leads to re-engineering and replacement of components.

Technical Risks:  Technical glitches, software bugs and inadequate system integration pose a significant threat. This includes failures to properly interlock between different systems or take into account data handling and integrity. Cybersecurity risks are a major concern because the connected nature of automation systems makes them vulnerable to data breaches and other cyber threats.

Strategic Risks:  Misalignment between automation deployment and the organization’s long-term business strategy can lead to inefficiencies and suboptimal outcomes. It is critically important to understand the link between business goals and the technical architecture chosen to meet them.

Change Management Risks:  Automation deployment invariably involves change, which could face resistance from employees. Inadequate training or preparation can also lead to bottlenecks, impacting the overall effectiveness of the deployment.

Accidents are Unavoidable

It should also be noted that, in complex systems, an action or series of actions that can cause failures may not be obvious. Mapping based on Perrow’s Normal Accident Theory may be a necessary step during the risk planning stage. Normal Accident Theory suggests that complex systems have an intrinsic property that drives toward an eventual catastrophic conclusion. It is therefore important to take a systems or holistic approach that encompasses not only equipment, but the human element as well. Interactions and the coupling or path between these events can be mapped while looking for patterns and paths to failure. Paths can be linear or complex depending on the system.  Strategic risks can more easily be recognized by the interaction of causal links.

De-risking Strategies and Risk Management Framework

It is important to implement any tool as part of a comprehensive risk management framework. Identifying risk is only the first step; the framework should encompass risk identification, risk assessment, risk response planning, and risk monitoring and control. Regular reviews can ensure the risk management strategies are effective and updated as needed. Here are some strategies that can be used to de-risk an automation deployment.

• FMEA: Failure Mode and Effects Analysis is a popular tool set used throughout many industries. It can be applied at the design stage (D-FMEA), the build or production stage (P-FMEA), or at the systems level (S-FMEA). An FMEA not only identifies the potential effects of a failure, but also assigns ratings for the probability of occurrence of the failure and the ability to detect it, and then seeks ways to reduce any of the three. FMEA sessions are conducted by groups of experienced professionals due to their subjective nature. FMEA is a tool that lends itself nicely to the idea that incident probabilities cannot be reduced to zero but can be mitigated with proper control strategies.

• Pilot Testing: Pilot testing is a valuable strategy that allows organizations to test ideas before committing too many resources. By implementing automation on a smaller scale or in a simulated environment initially, organizations can identify unforeseen issues, assess the system’s effectiveness and make necessary adjustments. This approach helps organizations avoid the disruption and costs associated with a full-scale rollout that might encounter issues.

• Phased Implementation: Phased implementation is a strategy that emphasizes gradual deployment, but at a higher level than pilot testing. By moving the project forward in increments, engineers learn, adjust and improve as they move along the automation journey. The benefit is that issues affect only a small part of the operation, minimizing overall operational and financial risk.

• Leveraging Data and Analytics: Data is a powerful tool for de-risking automation. Real-time monitoring of automation systems through analytics provides valuable insights into system performance and helps identify and address issues early. Predictive analytics can also be used to forecast potential future issues, allowing for proactive problem-solving. Current software employs AI technologies to deal with data at a scale previously impossible to achieve. AI tools can now predict critical maintenance activities with high levels of confidence. Containment event trees, for example, have become so complex that the analysis can only be accomplished by embracing AI technology.

• Cybersecurity Measures: Modern automation systems, due to their interconnectedness, are vulnerable to cybersecurity threats. Implementing robust cybersecurity measures is crucial. These could include intrusion detection and prevention systems, firewalls, secure system design, regular vulnerability assessments and employee training on cybersecurity best practices.

• Vendor Due Diligence: The selection of the right technology and vendor is a significant factor in a successful automation deployment. Conducting thorough due diligence ensures the chosen vendor has the necessary experience, expertise, reputation and support capabilities. Reviewing their previous project outcomes and getting feedback from past clients can provide valuable insights.

• Thorough Planning and Forecasting: Planning and forecasting can significantly reduce the strategic and financial risks associated with automation deployment. Organizations should carefully analyze the impact of automation on their existing processes, personnel and systems. Detailed cost-benefit analyses provide a clear view of expected return on investment. Scenario planning can prepare organizations for all the possible outcomes.

• Training and Communication: Effective change management strategies, including comprehensive training and clear communication, can mitigate the risks associated with employee resistance and inadequate preparation. It’s essential to communicate the objectives and benefits of automation clearly to all stakeholders, addressing any concerns and instilling a sense of ownership and acceptance. Additionally, a well-designed training program can equip employees with the skills needed to operate the new automation systems effectively.

• Continuous Improvement and Support: Risk reduction is not an activity that is only completed once. It’s an ongoing process that must be revisited and updated throughout all phases of a project. Continuous support, maintenance and system improvement are critical to the long-term success of an automation deployment. Regular system audits identify potential issues early, and a strong support team can resolve them promptly to minimize disruption. Additionally, a continuous improvement approach can keep the automation up to date with evolving business needs and technological advancements.
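The FMEA scoring described in the list above is commonly condensed into a Risk Priority Number (RPN): the product of the severity, occurrence and detection ratings, each typically scored from 1 to 10. A minimal sketch, with invented failure modes and ratings:

```python
# Minimal sketch of FMEA Risk Priority Number (RPN) scoring.
# RPN = severity * occurrence * detection, each rated 1-10.
# The failure modes and ratings below are invented for illustration.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("Robot gripper drops part",   7, 4, 3),
    ("Vision system false reject", 3, 6, 2),
    ("Conveyor motor overheats",   8, 2, 7),
]

def rpn(severity: int, occurrence: int, detection: int) -> int:
    return severity * occurrence * detection

# Rank failure modes so mitigation effort targets the highest RPN first.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for desc, s, o, d in ranked:
    print(f"{desc}: RPN = {rpn(s, o, d)}")
```

Reducing any one factor (for example, improving detection with a new sensor) lowers the RPN, mirroring the point above that incident probabilities cannot be reduced to zero but can be mitigated.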

While automation brings immense potential for efficiency gains and cost savings, it also comes with inherent risks. However, with a strategic approach these risks can be effectively managed. This comprehensive approach to de-risking will not only safeguard businesses against potential pitfalls but also set them on a path to sustained growth and competitive advantage in the digital era.

Vision Systems: How to Align Business Goals with System Limitations
https://www.engineering.com/vision-systems-how-to-align-business-goals-with-system-limitations/
Mon, 29 May 2023 09:26:00 +0000

Unleashing the full potential of vision technology in automated systems requires effective design, testing and collaboration.

The post Vision Systems: How to Align Business Goals with System Limitations appeared first on Engineering.com.

(Image Source: Sick AG)

Today’s industrial machine vision systems use high-resolution cameras, image processing algorithms, and artificial intelligence to enhance quality control, automate inspections and optimize production processes with precision and efficiency.

The implementation of vision systems spans applications like 2D barcode scanning, pattern and character recognition, measurement and error proofing. Over the years, the introduction of these systems for use in an industrial setting has been the subject of much debate.

Although there is potential for vision to enhance many aspects of manufacturing and assembly operations, the practical execution of these systems often falls short of the expected goals. The misapplication of vision tends to result from a failure to align business goals with technical goals, leading to architecture that does not meet the needs of the organization.

Understand the Purpose and Expected Outcomes

It’s important not to lose sight of why a vision system was selected in the first place. A business goal of zero defects to the customer is one path that leads to vision as the technical solution. However, recognizing that there could be other solutions is important if the initial vision solution does not deliver. In this example, the goal of zero defects would require a vision success rate of 100 percent, but a first-time success rate of 100 percent is extremely difficult to achieve in an industrial environment.

This disconnect between business goals and the resulting system architecture can become a major source of hardship for the implementation team. I have spoken to plant managers who have expressed regret over allowing cameras into their facilities, as the actual camera failure rate caused an unacceptable amount of downtime and lost production. It is important to note that the dissatisfaction was not a result of the quality or purpose of vision systems, but from a lack of understanding of the limitations of the technology combined with unrealistic expectations of performance.

The Importance of Effective Implementation

The treatment of these systems as a general project requirement as opposed to meeting specific business or technical goals creates difficulties for OEMs. Tight schedules can create a hurried development environment, resulting in poor specifications and incomplete testing.  Since many OEMs lack specialized vision expertise and rely on supplier support, implementation without proper reliability or repeatability testing is a serious risk. Consequently, substandard systems require significant rework before they can effectively support production once installed at the customer site. The system will then fall short of the actual business requirements, lacking in functionality, robustness or maintainability. Also, operator acceptance procedures and programming are frequently left for the customer to develop, which compromises overall system effectiveness and robustness. The responsibility of testing and final implementation can end up falling on the controls and process engineer, leading to re-engineered applications and unanticipated problems that require extensive troubleshooting.

The Role of Design, Testing and Optimization

The success of vision systems relies on proper up-front design, testing, application optimization and detailed requirements documentation. Despite engineers’ efforts to find cost-effective solutions that meet requirements, poor execution at the OEM level can undermine those efforts. High-quality cameras alone cannot compensate for suboptimal design and testing processes. Extensive fine-tuning is necessary after installation, but due to time constraints this stage is often rushed, leading to frustration and setbacks. This rushed approach contrasts with the need to gain a comprehensive understanding of the functionality, performance and limitations of the vision systems. User acceptance testing should include detailed simulation, testing and analysis of results. Risk assessment should precede any modifications to camera parameters after buyoff, and these modifications should only be implemented after thorough vetting.

The Dilemma: Can a reasonably priced vision system be implemented with a high success rate, good maintainability, easy adjustability and programmability? The answer: it depends on how well the alignment between business goals and technical capability is understood. Misapplication of requirements that fall outside the capability of the technology can be catastrophic. A fault-tolerant, error-free system can be achieved, but it comes with a cost in time and financial investment that some manufacturers may hesitate to commit.

Camera Limitations: Another vital aspect to consider is the understanding of failure rates. Adjustments made to a vision system with the aim of enhancing performance may take weeks to prove effective, even at failure rates as high as 1 in 10,000 pieces. Any modification could unintentionally increase the number of rejects, and it is only after running thousands of parts that the impact of any modification can be adequately evaluated.
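The evaluation problem described above follows directly from binomial statistics: if the true failure rate is p, the probability of seeing zero failures in a run of n parts is (1 − p)^n. A quick illustration (the run sizes are illustrative):

```python
# If the true failure rate is p, the probability of observing zero
# failures in a run of n parts is (1 - p)**n -- so short runs say very
# little about whether a camera tweak actually helped or hurt.

def p_zero_failures(p: float, n: int) -> float:
    return (1.0 - p) ** n

p = 1 / 10_000  # a failure rate of 1 in 10,000 pieces
for n in (1_000, 10_000, 50_000):
    print(f"{n:>6} parts with zero failures: "
          f"{p_zero_failures(p, n):.0%} likely even if nothing changed")
```

At a 1-in-10,000 rate, even a 10,000-part run has roughly a one-in-three chance of showing no failures at all, which is why thousands of parts, and often weeks of production, are needed before the impact of a modification can be judged.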

A common application is the use of cameras to read barcodes commonly used for tracking. Fixed cameras can never attain a 100 percent read rate, resulting in undesired production line stoppages. Obstructions (dust, dirt, coolant, etc.), poor lighting, camera misalignment or damaged barcodes can also cause inaccurate readings. To compensate for these challenges, backup handheld scanners are often deployed at various stations. While it is easy to point to the camera as the problem, the real issue often lies in the image quality itself. Simulation is a valuable tool that can be used behind the scenes to validate solutions before implementation or after a change, but simulation cannot entirely replace live testing. Therefore, any tweak to a camera should prompt the implementation of a bypass until the new parameters are thoroughly vetted.

Balancing Cost and Performance

Creating a vision system with a high success rate, straightforward maintenance and flexibility is feasible. However, company leadership must understand that the benefits of well-designed, easily adjustable and programmable systems are critical to ensure both business and technical needs are met. By striking a balance between cost, performance and the system’s ability to adapt and remain maintainable, manufacturers can still minimize downtime, optimize production processes and enhance overall operational efficiency.

Implementing vision systems in industrial environments presents a number of challenges that must be overcome for successful integration. Effective implementation requires careful consideration of factors such as design, testing, optimization, and alignment of expectations. The role of OEMs in delivering high-quality, fully functional systems cannot be understated, and close collaboration between OEMs and plant controls engineers is essential. While limitations exist, especially in barcode cameras, a proper understanding of the system’s purpose and limitations can aid in identifying alternative solutions or mitigating risks. By acknowledging the importance of thorough testing, comprehensive analysis of failure rates, and the need for adaptability, manufacturers can pave the way for the successful implementation of vision systems. The journey towards achieving high-performing vision systems is probably longer than you plan for at the start of a project, but the long-term benefits in terms of productivity, quality, and operational efficiency make it a worthwhile endeavour.

Develop ‘Soft Skills’ to Enhance Your Engineering Career
https://www.engineering.com/develop-soft-skills-to-enhance-your-engineering-career/
Mon, 24 Apr 2023 12:56:00 +0000

It takes more than technical proficiency to navigate management and workplace culture to create a winning work environment.

The post Develop ‘Soft Skills’ to Enhance Your Engineering Career appeared first on Engineering.com.


Engineers face plenty of challenges throughout their careers and can sometimes lose sight of the human element of engineering. In fact, this is a recognized concern—a 2021 article published by the Institute of Education Sciences lamented the lack of soft skills among engineers and engineering graduates.   

These soft or “people” skills are just as important as technical competence and, in some cases, even more so. In most environments, engineers must work collectively in teams as collaborators or team leaders. Communicating both simple and complex technical ideas to other personnel needs to be done carefully and effectively. When faced with the inevitable misunderstanding or conflict, engineers often encounter roadblocks that inhibit moving forward. Understanding who we are as engineers, and how to interact with others under a variety of circumstances, can go a long way in ensuring work gets done correctly and efficiently, and is a key ingredient in career advancement.

The three books listed here have proven helpful as part of my personal growth over the years and have allowed me to better use my technical talent to achieve goals throughout my career. Although a journey of self-discovery and improvement can be undertaken at any age, students should at least have some exposure to these concepts before entering the workforce. Graduates and seasoned engineers alike can benefit from balancing their technical and leadership skills. Complementing self-training with professional courses is a great way to learn and improve soft skills.

Conversation Skills

Conflict is inevitable, and how an engineer leads and navigates the conversation is crucial for anyone tasked with designing, maintaining and troubleshooting factory systems and new products. The book “Crucial Conversations” by Kerry Patterson, Joseph Grenny, Ron McMillan and Al Switzler is an excellent read, and attending a course or skills workshop based on this material can be a vital step. This book teaches skills that promote good communication and can help resolve conflict or avoid it altogether. Engineers and engineering managers need to focus on common goals and shared interests to ensure conversations are meaningful and productive. Many world-class companies have offered this course to all their employees to create more effective work environments. The recognition that employees are naturally diverse goes a long way in fostering trust within a company.

“Crucial Conversations” is not just a book about dealing with conflict, but about how to engage in conversations that avoid communication traps. Looking for common ground and listening to what others have to say creates a safe environment to conduct business, interact with other engineers, or deal with design issues on the factory floor. Like other soft skills, the ability to avoid or diffuse conflicts can be a huge advantage in the boardroom and production office.

Understanding Your Team

The Myers-Briggs Type Indicator (MBTI) assessment is an effective tool to help individuals understand their personality preferences, work style and communication tendencies. Although the assessment is typically done by trained professionals, there are a variety of online tools and texts available to get things started. For an engineer to communicate effectively, understanding oneself is just as important as understanding others. This assessment can also be used to highlight possible areas of self-improvement.

Some experts may challenge the MBTI’s effectiveness; however, in my personal experience it is an effective tool. Myers-Briggs seeks to define personality using the following traits in 16 combinations:

  • Introvert or Extrovert 
  • Sensing or Intuitive 
  • Thinking or Feeling 
  • Judging or Perceiving
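A quick illustration of why these four dichotomies yield exactly 16 types (2^4 = 16); the letter order follows the article’s listing, with N standing for Intuitive:

```python
# The four MBTI dichotomies pair off into 2**4 = 16 type codes.
from itertools import product

dichotomies = [("I", "E"), ("S", "N"), ("T", "F"), ("J", "P")]
types = ["".join(combo) for combo in product(*dichotomies)]

print(len(types))  # -> 16
print(sorted(types)[:4])
```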

After testing, I learned I had to strive to become more outgoing. After making a concerted effort, I was promoted to engineering manager, a significant achievement that I am confident would not have occurred without this insight and guidance. As a result of knowing myself and learning to be more understanding and flexible in my interactions, I was asked to mentor some of the more difficult engineers and take on a challenging department, because plant management had confidence that I was capable of functioning in difficult management environments.

Personality Types

The second book in this series references these personality types. “Type Talk at Work: How the 16 Personality Types Determine Your Success on the Job” by Otto Kroeger and Janet M. Thuesen, discusses how different personality types approach their career development and how they work. The authors also provide practical advice on how to use the MBTI results to improve communication skills leading to better collaboration and higher job satisfaction.

An understanding of the 16 personality types is only one step in the process. The authors emphasize that the key to managing others effectively is to manage yourself first. The next step is recognizing how each of the personality types can work with each other. People with certain personality types may excel when it comes to attention to detail and precision while others can be extremely creative but not able to define the practical application of their ideas. A good engineer can leverage the strengths of the people on a team to achieve success.

High-Performance Thinking

The third book, “Think Like an Engineer” by Mushtak Al-Atabi is a great selection for finding ways to develop problem-solving abilities once you have established your skills at type-watching and applying professional conversational techniques. Approaching issues in a more systematic, but still creative method, is a skill many engineers lack. This book helps engineers create a framework for high-performance thinking by providing practical examples and case studies. The author starts by building a foundation from which you and your team can promote a culture that supports performance expectations. Understanding how our brains are wired can also aid us in our development. Engineers can learn to avoid loopholes and biases by adapting the process of Conceive, Design, Implement and Operate (CDIO Model) to leverage our natural mental processes.

Building effective teams requires a good understanding of how to create and maintain good communication, collaboration and problem solving. This is the focus of part two of the book, which discusses how engineers need a solid understanding of their own personality type, and how to recognize types in others, to be effective. Creating a culture of trust and transparency can increase team performance by orders of magnitude, far exceeding the competition in many cases.

Developing soft skills is an absolute necessity if an engineer wants to successfully navigate management and workplace culture to create an efficient and safe working environment. Design skills and techniques are enough for an engineer to be considered proficient, but as your sphere of influence grows, more tools and techniques are needed to be effective and to promote growth by breaking out of our comfort zone. There are significant stresses imposed on engineers in modern manufacturing. Now, more than ever, we need to find ways to recognize personality types to work quickly and effectively while still engaging in challenging conversations.

How to Build an Engineer
https://www.engineering.com/how-to-build-an-engineer/
Mon, 27 Mar 2023 09:36:00 +0000

Encourage students to participate in academic and practical skills competitions to bring out their STEM potential.

The post How to Build an Engineer appeared first on Engineering.com.


(Image Source: FIRST)

Although some studies have indicated an overall increase in the number of students pursuing degrees related to technology in North America, there remain serious gaps in participation from important demographic groups. To address the chronic shortage of capable technical workers in North America, increasing participation in technology-related careers is critical.

One way to promote science and technology is by encouraging students to participate in a variety of current STEM (Science, Technology, Engineering and Mathematics) activities that are available, such as science and engineering fairs and robotics competitions. These activities stimulate curiosity and provide young people with a unique opportunity to research, learn, experiment and develop the skills necessary to pursue careers in science and technology.

Being challenged with practical problems teaches students to not only apply their theoretical knowledge but to also develop critical skills as they gain exposure to the latest technology. With proper guidance, students can develop a passion for engineering, making science fairs and other competitions a valuable part of the academic journey.

Tournaments

In Canada, the Ontario Provincial Championship of the VEX Robotics tournament ran on the weekend of March 4, 2023 at the Niagara Falls Convention Centre. Students from all over Ontario competed by building and programming autonomous robots to complete various tasks while overcoming a variety of obstacles. Watching the students compete always provides a great insight into the future engineering and technology leaders of the up-and-coming generation.

Another popular competition along the same lines as VEX is the global FIRST Robotics Competition (FRC) for grades 9 to 12 and the FIRST Lego League (FLL) for grades 4 to 8. Both include local and international events where teams build and program robots to perform specific tasks on a game field. A notable difference is that the younger FLL teams are expected to build and program their robots without major adult involvement (aside from some supervision and mentoring).

Skills Ontario, a program funded by both the federal and provincial governments, provides another opportunity for youth to be exposed to skilled trades and technology. Skills Ontario sends winning teams to WorldSkills. Along with general technology, students are encouraged and empowered to explore careers in skilled trades and are also given assistance in getting a start in those areas in the form of tools and resources. Skills Ontario has close links with many industry partners, which gives students access to potential co-op, temporary or permanent job opportunities.

All these competitions require students to apply their knowledge of engineering, programming and controls to design and build a functional machine or device. They must work as a team to coordinate and optimize performance, gaining hands-on experience designing and building complex systems. This experience is invaluable for future engineers, who need a deep understanding of how many different complex systems work and how to design and optimize them.

Science Fairs

Long before computers, programming and robotics became main-stage events, science and engineering fairs and other similar competitions were an integral part of the academic journey. They offer a unique opportunity for young students to apply their theoretical knowledge to practical problems. In school, students learn the fundamental theories of engineering, programming and control techniques; however, it can be challenging to see how these theories are applied in the real world.

Application

Participating in science fairs and other technical competitions gives young students a distinct advantage when studying to become engineers. The benefits include (but are not limited to):

  • Applying theoretical knowledge to practical problems
  • Developing critical skills
  • Access to the latest technology and tools
  • Building networks and gaining exposure to industry professionals
  • Developing a passion for controls engineering

These challenging activities teach students important skills, such as how to work in a team while solving real-world and simulated problems. Students learn that openly communicating ideas, then applying them to the given problems, is essential to working together effectively. Some of the systems students must create are very complicated, which means doing research and seeking advice from professionals working in their respective fields.

Access to the Latest Technology and Tools

Science fairs and competitions provide an opportunity for students to gain exposure to the latest technology and tools. In competitions like FIRST Robotics, students are often shown cutting-edge technologies such as advanced sensors, microcontrollers and programming languages. By gaining experience with these technologies, students can develop a deeper understanding of how they can be used to design and optimize systems and will be better prepared to understand and use the next generation of languages and technologies that have yet to be invented.

Moving Forward

Science fairs and competitions spark interest in students and motivate them to pursue a career in one of the STEM fields. These events provide students with an opportunity to explore their passion for engineering, showcase their work to potential employers, and receive recognition for their accomplishments. Networking can be invaluable when it comes to finding internships, job opportunities, and building relationships with industry professionals. For some students, these events can be life-changing and set them on a path towards a successful career in engineering and technology.

The post How to Build an Engineer appeared first on Engineering.com.

Avoid the Confirmation Bias Trap When Troubleshooting https://www.engineering.com/avoid-the-confirmation-bias-trap-when-troubleshooting/ Fri, 24 Feb 2023 11:16:00 +0000 https://www.engineering.com/avoid-the-confirmation-bias-trap-when-troubleshooting/ Engineers need to deploy the correct problem-solving techniques to arrive at the best solution and avoid repeat occurrences.

The post Avoid the Confirmation Bias Trap When Troubleshooting appeared first on Engineering.com.


Let’s face it, manufacturing is complicated. Even the most advanced, well-managed factories will experience manufacturing problems on an ongoing basis. Some of these will be simple and have an obvious, quick solution while others will require rigorous exploration to uncover the root cause. One thing that shouldn’t change, however, is how engineers approach manufacturing, maintenance, design and shop floor problems.  

It’s vital to consider all the potential causes and solutions before arriving at a conclusion. As data is collected and analyzed, initial solutions can inadvertently be favored over better options. This tendency, referred to as “confirmation bias,” involves knowingly or unknowingly favoring information that supports preconceived ideas while ignoring or discrediting data that contradicts them. It’s a common and often subconscious tendency, which makes it extremely important for engineers to be aware of it.

Every effort must be made to consider the totality of perspectives, causes and solutions to arrive at the best outcome for a given problem while avoiding suboptimal results and continued or recurring malfunctions. The tendency to favor or fixate on one particular solution can only be overcome using solid problem-solving tools and methodology. 

Various tools can be deployed depending on the nature of the problem, the desired outcome and the resources available. One example, the Red X process, helps avoid confirmation bias in a variety of ways. Red X, developed by Dorian Shainin, refers to the dominant root cause, marked in red on a chart that ranks factors by their observed effect on the quality of a process (also known as a Pareto chart).
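The ranking step behind a Pareto chart can be sketched in a few lines of Python. The defect categories and counts below are entirely hypothetical, used only to show how causes are ordered by their contribution to the total:

```python
from collections import Counter

# Hypothetical defect log -- category names and counts are illustrative only.
observations = [
    "misaligned fixture", "misaligned fixture", "misaligned fixture",
    "worn tooling", "worn tooling",
    "loose connector", "operator error",
]

counts = Counter(observations)
total = sum(counts.values())

# Rank causes by frequency and accumulate their share of all defects,
# just as a Pareto chart does; the top bar is the candidate "Red X".
cumulative = 0.0
for cause, n in counts.most_common():
    cumulative += 100.0 * n / total
    print(f"{cause:20s} {n:2d}  cumulative {cumulative:5.1f}%")
```

In a real Red X study the counts would come from structured tests of competing hypotheses rather than a raw log, but the ordering principle is the same.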

This technique begins by recognizing that a problem exists and must be solved. A trained, skilled facilitator can help guide and teach, and many companies train key personnel to lead the Red X process when problems arise. The focus should be on finding the root cause without the baggage of preconceived notions of what the problem or solution could be. The strength of this technique is that data is collected by testing a variety of hypotheses. The analysis of the data, using statistical tools, is used to identify patterns or trends that could, in turn, point towards a root cause. Multi-discipline teams are often used to provide multiple perspectives, forcing the team to keep an open mind throughout the process.

Another simple, popular method is the 5 Whys method of problem solving. Engineers identify the root cause by pushing themselves to continually ask “why?” This is done methodically in steps:

  • Define the problem 
  • Ask why the problem occurred (this should generate a reason). 
  • Ask why the reason existed that caused the problem (this should generate a second reason). 
  • Ask why the second reason occurred and discover yet another reason. 
  • Repeat this process at least 5 times or until the actual root cause and reasons are found and can be addressed. 
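The steps above can be sketched as a simple loop. The problem statement, the chain of reasons and the function name below are all hypothetical, chosen only to illustrate the mechanics:

```python
# Hypothetical cause chain: each finding maps to the answer of "why did
# that happen?". In practice these answers come from investigation, not
# from a prepared table.
why_chain = {
    "machine stopped": "overload relay tripped",
    "overload relay tripped": "motor bearing seized",
    "motor bearing seized": "lubrication was missed",
    "lubrication was missed": "machine has no preventive-maintenance entry",
    "machine has no preventive-maintenance entry": "machine was never registered after installation",
}

def five_whys(problem, chain, depth=5):
    """Follow the why-chain up to `depth` levels and return the trail."""
    trail = [problem]
    for _ in range(depth):
        if trail[-1] not in chain:
            break  # ran out of answers before reaching `depth`
        trail.append(chain[trail[-1]])
    return trail

for i, step in enumerate(five_whys("machine stopped", why_chain)):
    label = "Problem" if i == 0 else f"Why #{i}"
    print(f"{label}: {step}")
```

Note that the fifth “why” lands on a process gap (the machine was never registered), which is what actually gets fixed, rather than the symptom (the stopped machine).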

Using the 5 Whys helps avoid confirmation bias not only by challenging initial assumptions but also by encouraging the problem-solving team to consider multiple potential causes. Continuing to ask “why” ensures the engineer looks deeper and considers alternative causes or factors that may not be obvious contributors to the problem. This approach ensures the root cause is identified, which avoids merely treating the symptoms of the problem. 

Real-Life Example 

Let’s look at an actual industrial event from my career as an engineer to illustrate how bypassing data analysis can result in a repeat of the same catastrophic event. In this example, the second event occurred not long after the initial damage was repaired and the equipment replaced. It began when a transformer failure and subsequent fire shut down an entire vehicle engine plant as it was ramping up into production. The failure happened after protective relays tripped in succession all the way back to the main substation. After isolating the transformer and resetting the relays, the failure was quickly blamed on premature failure of the new transformer, most likely from a manufacturing defect, combined with a lack of fuse and relay coordination by the contractor. 

The only action taken was to update the fuse and relay coordination to ensure only the defective transformer would be isolated if another event occurred. Shortly afterwards, a new transformer arrived and was put into service. 

Even a cursory review of this sequence of events should have immediately raised red flags: not even the most basic data collection took place. The engineers were surprised when, less than three months later, a second power transformer suffered the same fate, although the new coordination settings prevented the entire plant from shutting down. 

A new investigation revealed that the transformer insulation had slowly degraded over time due to improper transient suppression. The transients were created by switching events around the industrial complex near the plant. Had an investigation team been commissioned after the initial failure, it would have discovered that the identified causes were unlikely to have triggered the short circuit and fire in the first place. 

Confirmation bias happens all the time. An example in everyday life is the transition to electric vehicles (EVs). Moving directly from one method of energy consumption to another seems to be the accepted approach. However, it can be argued that it is better to focus first on conserving energy before converting to EVs, because it is more effective and efficient to reduce energy consumption through conservation measures than to simply replace fossil fuels with electricity. This approach could offer a reduction in carbon emissions while allowing more time to modify the global infrastructure to accommodate the conversion to electric. The example shows that the conclusion is often used to justify the solution instead of taking all the potential measures into account first. Also, jumping straight to conversion bypasses more timely and beneficial steps that will now never be implemented. 

Confirmation bias does not preclude finding a solution, especially in situations where more than one solution exists. Nor does having a bias mean your solution is incorrect. However, using good problem-solving methodology is always the best way to approach any engineering challenge, whether in troubleshooting or design. 

 

Three Books Every Engineer Should Read https://www.engineering.com/three-books-every-engineer-should-read/ Sun, 22 Jan 2023 14:43:00 +0000 https://www.engineering.com/three-books-every-engineer-should-read/ Students and seasoned professionals alike can gain insight from reading a variety of books.

The post Three Books Every Engineer Should Read appeared first on Engineering.com.


Reading about the history of engineering gives engineers a better understanding of how various techniques and technologies have evolved over time and how they have been applied in the past to solve various problems. Engineers often face new and challenging problems and reading about how similar problems were tackled in the past can provide valuable perspective to solve current challenges.

Engineering in the Ancient World by J. G. Landels examines the engineering advancements and fascinating achievements of some of the oldest civilizations. The book covers a wide range of topics, including the development of the wheel, the creation of roads and bridges, and the use of water wheels and other machines for power. Engineering played a critical role in the development and success of these societies, and their innovations continue to influence the world we live in today.

The modern engineer can gain some appreciation for how engineering was a driving force behind the success of many ancient civilizations, as it allowed them to build structures, create systems for transportation and communication, and harness the power of nature. The book covers how different civilizations used engineering to adapt their environment and meet the needs of their people, much as we do today.

The Greeks were known for their advances in architecture, including the construction of temples, theaters, and public buildings. They also made significant contributions to the fields of mathematics and engineering, which laid the foundations for modern science and technology. The Romans, meanwhile, were renowned for their engineering feats, including the construction of roads, aqueducts, and public baths. These developments were critical as they allowed the Romans to maintain a vast network of roads, communicate with their citizens, and provide clean water to their cities.

Reading about the history of engineering can help engineers appreciate the contributions of past engineers and the role that they have played in shaping the field as it is today. Engineers often need to explore the broader context in which engineering operates to help them solve the vast variety of problems faced today.

Understanding Human Error

When designing safe systems, the importance of anticipating the interaction between operators and the systems they use cannot be overstated. The Field Guide to Understanding Human Error by Sidney Dekker provides insight into how and why humans make mistakes, and how these mistakes can be prevented or mitigated in the future. Engineers need to understand that, in many instances, the design of the system is the cause of the human error, and that trained and knowledgeable people, faced with the same problem, will make the same mistake.

According to Dekker, human error is a normal and inevitable part of human performance, and it cannot be completely eliminated. This means the focus of design should be on how to create systems and processes that accommodate and recover from human error, rather than trying to prevent it altogether. Engineers and designers must embrace the concept of “resilience engineering,” which involves designing systems that can adapt and recover from unexpected events or failures. This includes designing for flexibility and adaptability, as well as having contingency plans in place to deal with potential problems.

Minimizing human error can be done through human factors engineering, which focuses on designing systems that are easy to use and understand. Providing clear instructions, using simple and intuitive controls, and considering the needs and limitations of the people who will be using the system is a large part of a successful design. Implementing robust processes and procedures, such as thorough training programs, checklists, and thorough testing of systems before they are put into use are some of the ways to reduce human error.

A key concept discussed in the book is the idea of “latent conditions,” which are the underlying causes of human error. These can include inadequate training, lack of resources, or even just poor communication within an organization. By identifying and addressing these latent conditions, organizations can reduce the likelihood of human error occurring.

This book is a valuable resource for those working in engineering as it provides a comprehensive overview of the causes of human error and offers practical strategies for designing systems that can accommodate and recover from these mistakes. By understanding and addressing the underlying causes of human error, organizations can improve the safety and reliability of their systems and processes, ultimately leading to better outcomes for all involved.

Everyday Design

The Design of Everyday Things was written in 1988 by Donald A. Norman and was recently updated. It discusses the importance of good design in everyday objects and how this can impact our lives. The concepts of good design are explored by taking a closer look at objects we see every day. One key theme of the book is that good design should be intuitive and easy to use, and that bad design can lead to frustration and confusion. Good design is not just about aesthetics, but about creating products that are functional and user-friendly. Designers must consider the needs and limitations of their users and create products that are accessible and usable by a wide range of people.

The book begins by discussing the concept of “affordances,” which are the properties of an object that determine how it can be used. Norman argues that good design should make it clear how an object can be used, thereby minimizing the possibility of error or confusion. Using examples as common as a door handle helps the reader visualize and understand the point being made. How an object is used can be guided by “constraints,” the limitations that direct our actions and help us understand how to use an object. A good design should provide the right constraints to make it easy for people to use an object, while also allowing for flexibility and adaptability. Throughout the book, Norman uses examples from a wide range of everyday objects, including kitchen appliances, light switches and office equipment. These serve to illustrate his points and demonstrate the principles of good design.

The book also discusses the role of psychology and human cognition in design and how designers can take these factors into account to create more effective and user-friendly products.
