TTI, Author at Engineering.com (https://www.engineering.com/author/tti/)

Enhancing battery monitoring in eVTOL applications
https://www.engineering.com/enhancing-battery-monitoring-in-evtol-applications/
Tue, 06 Aug 2024
Harwin explores key considerations for selecting connectors to ensure an effective battery management system.

The post Enhancing battery monitoring in eVTOL applications appeared first on Engineering.com.

TTI has sponsored this post.

Written by: Ryan Smart, Vice President of Product, Harwin

(Image: TTI.)

Electrically powered vertical take-off and landing (eVTOL) aircraft are shaking up the aerospace industry. The trend is similar to the effect that electric vehicles (EVs) have had on the automotive market. This article explores the requirements of eVTOL applications, focusing on battery management and power and signal connectivity. The area is of vital importance, as the aircraft need detailed data about the onboard battery/powertrain system to run safely.

According to McKinsey, eVTOL has attracted $12.8 billion in investment over the last 12 years. Currently, around 200 companies worldwide have development projects in the sector.

eVTOL is suitable for various applications, but the most popular one is urban transportation. Aerial taxis will offer faster, greener and more efficient transfers from city locations, such as financial quarters, to airports. eVTOLs could replace the helicopter services currently used for this purpose.

eVTOL-based transportation will also be more cost-effective. Aviation fuel costs are constantly increasing, but most eVTOL aircraft don’t require any fuel. There are also other commercial and logistical benefits. The first of these is noise: eVTOL will help reduce noise pollution. At the moment, this is the main factor restricting helicopter operation at nighttime. With eVTOLs, however, commercial flights could potentially run 24/7. Replacing helicopters could also improve air quality in city centers, as eVTOLs don’t generate air pollution.

Key engineering design considerations

Unlike conventional aircraft, eVTOL must deliver vertical and horizontal propulsion. The movement can be achieved with fixed vertical rotors for take-off/landing and horizontal ones for moving forward. Alternatively, the rotors can use actuators to move between vertical and horizontal flight configurations.

Powering the constituent electrical actuators is another critical function. These actuators control the aircraft roll moment by deflecting aileron surfaces and pitch moment by deflecting the elevator. Yaw moment is managed through rudder deflection and thrust force by changing the propeller speed. The aircraft designs must also incorporate the infrastructure for distributing power to electric-propulsion motors, positioning systems, tele-networking and cockpit/mission systems.

As eVTOLs are smaller and lighter than conventional aircraft, they are also less stable. While traditional aircraft become lighter during flight as they burn fuel, eVTOLs don’t. They remain the same weight throughout the flight, which puts more stress on the structure during landing. These requirements need to be built into the design. This means using robust materials in the airframes as well as electrical components.

Importance of battery monitoring in eVTOL designs

eVTOL aircraft are powered by large Li-ion batteries. Therefore, an effective battery management system (BMS) is essential. Data relating to current, voltage, temperature and other parameters must be continuously available to ensure optimal performance and safety of the passengers. If a battery fault that should have been detected leads to an accident, the aircraft manufacturer or operator could damage their reputation beyond repair.

In an electric road vehicle, managing risks is more straightforward. The EV can automatically stop and alert the occupants if there is a risk of thermal runaway within the battery. For eVTOLs, it is not as simple. When a fault occurs, the aircraft could be thousands of meters up in the air. Likewise, if a cell malfunctions and goes offline, the effect might be severe. In a ground vehicle, it will mean a loss of traction. But for an eVTOL, a power failure could result in a sudden drop in altitude. That’s why BMS monitoring of the cells needs increased scrutiny to identify and mitigate potential problems as quickly as possible.
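To make that monitoring role concrete, the sketch below shows the kind of per-cell limit check a BMS applies to each telemetry sample. The thresholds are illustrative placeholders, not certified eVTOL limits; real values come from the cell datasheet and airworthiness requirements.

```python
# Minimal sketch of per-cell BMS limit checking. All limits are
# hypothetical examples, not values for any real cell or aircraft.

V_MIN, V_MAX = 3.0, 4.2   # safe cell voltage window, volts (assumed)
T_MAX = 60.0              # max cell temperature, deg C (assumed)
I_MAX = 150.0             # max cell current, amps (assumed)

def check_cell(voltage, temp, current):
    """Return a list of fault flags for one cell's telemetry sample."""
    faults = []
    if not V_MIN <= voltage <= V_MAX:
        faults.append("voltage")
    if temp > T_MAX:
        faults.append("overtemperature")
    if abs(current) > I_MAX:
        faults.append("overcurrent")
    return faults

print(check_cell(3.7, 35.0, 80.0))  # healthy sample: no flags
print(check_cell(3.7, 72.0, 80.0))  # overheating sample: flagged
```

A real BMS layers rate-of-change checks, cell-to-cell comparison and redundancy on top of simple thresholds, but the principle is the same: every sample is screened so faults are caught while the aircraft still has time to react.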

What to look for in a connector

The connectors used in eVTOL BMS implementations need to be chosen carefully. Here is a summary of what engineers should consider:

Compactness: eVTOLs are dependent on electrical propulsion, so the components need to be small. They must take up minimal board space and have low profiles.

Contact density: eVTOLs’ battery packs feature many Li-ion cells. Compact connectors with dense contact arrangements help achieve data acquisition more easily.

Weight: eVTOL designs have strict weight constraints, so the fuselage and hardware must be as light as possible. The same goes for the components used. Light construction helps maximize the number of passengers or the amount of cargo the aircraft can carry.

Reliability: To guarantee passenger safety, the connectors must function over a prolonged operational life without failure.

Robustness: The connectors must maintain continued operation in harsh working conditions. They will have to withstand shocks, vibrations and extreme temperatures.

EMI susceptibility: Due to proximity to electrical sources, the designs must consider electromagnetic interference (EMI). Overlooking the issue can result in poor data quality, which can affect the decisions made by the BMS.

Component expense: As the eVTOL sector is very cost-sensitive, keeping the bill-of-materials (BoM) down is vital. This, combined with the low volume levels, means that custom-built components are not an option. Instead, companies must have access to off-the-shelf products to optimize their budgets.

Quality: Complete output repeatability in the production process is paramount when supplying parts to the aerospace market. Any variation could have dire consequences. That’s why it’s essential to work with connector suppliers that conform to globally recognized quality standards.

Picking the most applicable products

Harwin has a long history of providing aerospace OEMs with high-reliability (Hi-Rel) connectors. Committed to quality engineering, its manufacturing facility is certified per EN9100D/AS9100D quality standards. When working with the eVTOL sector, the Harwin team benefits from experience with unmanned aerial vehicle (UAV) projects. Regarding size, weight and robustness, the connectors used in UAVs have similar requirements to those used in eVTOLs.

Optimized for use in various eVTOL systems, the 1.25mm-pitch Gecko connectors deliver powerful performance and reliability. These lightweight, compact components have 2A-rated contacts made from durable beryllium copper. The patented 4-finger contact design means that interconnections remain unaffected by even the most intense shocks and vibrations.

Some applications, such as eVTOL BMS installation, require a larger number of contacts and large signal currents. Here, Harwin’s Datamate 2mm-pitch Hi-Rel connectors offer significant advantages. They are available in single, dual and triple-row configurations. Like the Gecko series, they provide industry-leading resilience to harsh environments. This means withstanding shocks of up to 100G. Their contacts can carry 3A of current (3.3A on an individual contact). A choice of latching mechanisms makes it easy to find the best match for the available space or the operating conditions.

Harwin’s Gecko connectors are well suited to space/weight-constrained eVTOL designs. (Image: TTI.)

Gecko and Datamate connectors can come with integrated back shells to combat EMI issues. Cable assemblies are also available to accompany them. They are available in any length and configuration, even for small quantities. Harwin also offers Mix-Tek versions of both connector series. These devices make it possible to address power and data signals with just one component, saving space and simplifying design layouts.

Harwin’s Datamate connectors, widely used by the avionics industry. (Image: TTI.)

Conclusion

eVTOL will offer an environmentally friendly and more economical way of providing short-hop flights. The lower costs and 24/7 operation could make it accessible to more people. Battery reserve and performance will be central to eVTOL services, and will also provide manufacturers with a way to differentiate their models in a competitive market. Recharging speed and the distance the aircraft can travel before recharging will be key differentiators. These requirements highlight the role of the BMS function in boosting battery performance and extending its longevity. Finally, having superior BMS interconnects means that accurate data is always available. This helps maintain optimal safety in eVTOL aircraft.

To learn more, visit TTI and Harwin.

Component considerations for radar applications that leverage fully digital beamforming
https://www.engineering.com/component-considerations-for-radar-applications-that-leverage-fully-digital-beamforming/
Tue, 30 Jul 2024
Here’s what you need to know about energy storage capacitors, wideband filters, bypass capacitors and other radar components.

TTI has sponsored this post.

Radar systems are continuously evolving as threats become more diverse. These systems are expected to register anything from drones to hypersonic missiles. As a result, modern radars are becoming more agile. Increasingly, that means they rely on a multifunction array (MFA), where one array can be used for search, track and targeting as well as electronic warfare and communications functions.

The need for a single-array configuration, paired with the desire to improve signal-to-noise ratio (SNR) with an analog-to-digital converter (ADC) on every antenna element, was a catalyst for the adoption of fully digital beamforming.

With fully digital beamforming, shown below, each antenna element can transmit and receive multiple beams or split them in different directions simultaneously without interference. In addition, each element is software-defined, so control and tuning can occur on an application-specific basis. Designs that leverage fully digital beamforming use space more efficiently while achieving more comprehensive radar coverage.

Example of a fully digital beamformer. (Image: Knowles.)

Radar systems and other military applications have always been restricted by size, weight and power (SWaP) requirements. Now, engineers are up against additional size constraints to support electronics with fully digital beamforming. In addition to managing higher power consumption, integrated devices must fit in a denser phased array with an antenna pitch measuring half the wavelength (i.e., λ/2) or less for optimal array performance. Wavelength decreases as frequency increases, so size requirements only become more restrictive in high-frequency applications.
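The λ/2 constraint is easy to quantify: wavelength is the speed of light divided by frequency, so the maximum element pitch shrinks quickly as frequency rises. A quick calculation (the frequencies are chosen only as examples):

```python
# Maximum phased-array element pitch: lambda / 2, where lambda = c / f.
C_M_PER_S = 299_792_458.0  # speed of light in vacuum, m/s

def half_wavelength_mm(freq_hz):
    """Half wavelength in millimeters at the given frequency."""
    return (C_M_PER_S / freq_hz) / 2.0 * 1000.0

for f_ghz in (3.0, 10.0, 35.0):
    print(f"{f_ghz:g} GHz -> pitch <= {half_wavelength_mm(f_ghz * 1e9):.1f} mm")
```

At 10 GHz the pitch budget is already only about 15 mm per element, and every component behind that element, filter, amplifier, data converter and decoupling, has to share it.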

Under these conditions, there’s a variety of components that must fit into a smaller amount of board space. Here are some component selections that deserve special consideration:

Energy storage capacitors

Energy storage capacitors in radar T/R modules support pulsed operation in power amplifiers, and with high-performance expectations and little space, these passive devices are especially SWaP-challenged. Low-profile aluminum electrolytic capacitors like the MLPS Flatpack series, designed and manufactured by Knowles Precision Devices’ subsidiary Cornell Dubilier, offer high capacitance density in a flat configuration for space-saving. These military-grade capacitors are optimized for 10,000 hours at 105 °C, making them ideal for T/R modules and other system electronics that maintain high performance and reliability in a small footprint.
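A first-order way to see why these bulk capacitors need such high capacitance density: the capacitor must hold up the amplifier rail during each transmit pulse, and the required capacitance follows from C = I·Δt/ΔV. The pulse numbers below are illustrative assumptions, not figures for any specific T/R module:

```python
# Rough bulk-capacitance sizing for a pulsed power-amplifier rail:
# the capacitor supplies pulse current I for duration dt while the
# rail droops no more than dV, so C >= I * dt / dV.
# All values are illustrative assumptions.

def required_capacitance(pulse_current_a, pulse_width_s, allowed_droop_v):
    """Minimum bulk capacitance in farads for the given pulse demand."""
    return pulse_current_a * pulse_width_s / allowed_droop_v

# Hypothetical example: 20 A pulse, 100 us wide, 0.5 V allowed droop.
c_farads = required_capacitance(20.0, 100e-6, 0.5)
print(f"{c_farads * 1e6:.0f} uF minimum")
```

Thousands of microfarads in a low-profile footprint is exactly the regime where flat-pack aluminum electrolytics earn their place.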

Wideband filters

Wideband filters with high rejection are similarly challenged by strict SWaP requirements. To protect the receiver, these filters must be positioned at every element, and as mentioned above, they must be sized at λ/2 or less to fit in the phased array antenna pitch.

Knowles Precision Devices’ 10 GHz surface mount bandpass filters support direct sampling receivers enabled by high-speed RF-ADCs. With deep expertise in high-reliability ceramic devices, Knowles Precision Devices fabricates its DLI brand filters on high-k ceramic substrate materials to achieve high performance in a footprint smaller than λ/2.

Bypass capacitors

Fully digital beamformers often include devices, like low-noise amplifiers, that can be implemented as high-frequency monolithic microwave integrated circuit (MMIC) dies. MMIC amplifiers with broadband gain need protection from RF noise on the supply line. Bypass capacitors offer an efficient path for RF energy to ground before it enters a gain stage. Look for wire-bondable microwave capacitors (rather than surface-mount) that can provide the right amount of capacitance at a high operating voltage for MMICs in high-frequency applications like radar.

High Q capacitors

Q factor, or quality factor, is a figure of merit used to rate and compare multi-layer ceramic capacitors (MLCCs). It’s expressed as the ratio between stored energy and lost energy per oscillation cycle. In resonant circuits, power loss is accounted for via the equivalent series resistance (ESR). Higher ESR indicates higher losses in the capacitor. In high-frequency applications, maintaining efficiency and reliability at the component level is an important contribution to performance optimization. MLCCs built with high Q material are specifically designed to overcome this design challenge.

High Q MLCCs will have a low εr value, and they’re generally built in the pF range to mitigate power loss and minimize the likelihood of overheating. High frequency and low power loss are critical parameters for radar systems. Consider MLCCs based on high Q dielectrics to ensure high performance. Knowles Precision Devices offers ultra-low ESR, high temperature, high power, ultra stable and leaded options.
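The Q/ESR relationship above can be made concrete: at frequency f, a capacitor's reactance is 1/(2πfC), and Q is that reactance divided by ESR. The part values below are illustrative assumptions, not measured data for any specific MLCC:

```python
import math

# Q of a capacitor at frequency f:
#   Q = Xc / ESR, where Xc = 1 / (2 * pi * f * C).
# The component values are illustrative assumptions.

def capacitor_q(freq_hz, cap_f, esr_ohm):
    """Quality factor of a capacitor from its reactance and ESR."""
    xc = 1.0 / (2.0 * math.pi * freq_hz * cap_f)
    return xc / esr_ohm

# Hypothetical 1 pF part with 0.1 ohm ESR, evaluated at 10 GHz:
print(round(capacitor_q(10e9, 1e-12, 0.1)))
```

The same formula shows why high Q MLCCs are built in the pF range: for a fixed ESR, smaller capacitance means higher reactance and therefore higher Q at a given frequency.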

High-reliability capacitors

Radar systems subject components to intense operating conditions. To ensure quality and performance over time, they must face testing at elevated conditions. Manufacturers perform accelerated life cycle testing to better inform you of a component’s limitations. For example, chip capacitors and dielectric formulations undergo burn-in or voltage conditioning to assess their reliability at a specific voltage and temperature level for a duration of time. Capacitors that fail this test usually lose resistivity under these conditions early in the test cycle.

Common high-reliability military specifications, including MIL-C-55681, MIL-C-123 and MIL-C-49467, each have their own applicable specifications for reliability testing. Work with a manufacturer that has the experience and capacity to run and document these tests. Knowles Precision Devices typically uses a test voltage that is twice the working voltage rating of the device, at 85°C or 125°C for a duration of 96, 100, or 168 hours of test time, and maintains the capacity to process approximately four million parts per month to uphold strict screening criteria.

Supporting innovation in radar systems

While many manufacturers can accommodate MIL-level screening and high-reliability applications, Knowles Precision Devices has designed and tested to these standards for decades with no field failures. The support of an experienced component design and manufacturing company with custom capabilities and extensive testing equipment is key to the continued success and advancement of radar technologies. Whether you’re working with a cutting-edge system or legacy equipment, every component selection makes a difference, so leverage a component manufacturer’s expertise. Knowles Precision Devices’ engineering team monitors current trends that impact your design needs and adapts accordingly, so your team can focus on the core research and development efforts at hand.

For more information on off-the-shelf or custom components for radar or other high-reliability systems, contact Knowles Precision Devices to connect with our engineering team.

The 4 types of EV current sensors
https://www.engineering.com/the-4-types-of-ev-current-sensors/
Mon, 15 Jul 2024
Learn the difference between shunt, open-loop, closed-loop and flux gate current sensors for battery management systems and motor control.

TTI has sponsored this article.

Electrification has a profound impact on a wide range of industries. Current sensing technology was once used mainly in industrial sectors, but the Paris Agreement, the broader push toward clean energy and the launch of rechargeable lithium-ion batteries have expanded its use into many new applications. These applications also demand a much higher current range than ever before: lithium-ion batteries used in automotive applications have very high energy density, so the need for high-power current sensors has become paramount.

Here’s what engineers need to know about current sensor technology and how to use it in electric vehicles.

What are current sensors and where are they used?

As platforms become more electrified, current sensing is required in applications such as power conversion, battery charging in electric vehicles, and industrial equipment and processes. High-power current sensors monitor and measure the electrical current flowing in these applications.

The Honeywell CSSV1500 current sensor meets ASIL C requirements. (Image: Honeywell.)

Current sensors are designed to detect and measure the current passing through a wire or conductor. They generate a signal that is proportional to the current, which can take the form of analog voltage, current, or a digital output.

This output signal serves various purposes:

  • It can display the measured current
  • It can be stored for further analysis in a data acquisition system
  • It can be used for control purposes to limit or stop the flow of current

Current sensors play a critical role in maintaining the safety of battery systems. In modern battery systems, they monitor two key parameters of the battery: state of charge (SoC) and state of health (SoH). To do so, they must accurately track the power consumption of electric vehicles and estimate the remaining charge in the battery.
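The SoC tracking described above is commonly implemented by coulomb counting, that is, integrating the measured current over time. The sketch below is a simplified illustration under assumed values (a hypothetical 100 Ah pack at a constant 50 A draw); a production BMS also applies temperature, aging and voltage-based corrections:

```python
# Coulomb counting: integrate measured current over time to track
# state of charge. Pack capacity and load values are hypothetical.

def update_soc(soc, current_a, dt_s, capacity_ah):
    """One integration step. Positive current = discharge. SoC in [0, 1]."""
    delta_ah = current_a * dt_s / 3600.0
    return max(0.0, min(1.0, soc - delta_ah / capacity_ah))

soc = 0.80                    # start at 80% charge
for _ in range(3600):         # one hour of 50 A discharge, 1 s samples
    soc = update_soc(soc, 50.0, 1.0, 100.0)  # 100 Ah pack (assumed)
print(f"SoC after 1 h: {soc:.2f}")
```

Because the estimate is an integral of the current measurement, any sensor offset or gain error accumulates over time, which is exactly why the sensor accuracy discussed in the following sections matters so much.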

Many current sensor designs are direct applications of Ohm’s law, rearranged depending on which quantity is being solved for. If voltage is being calculated, Ohm’s law is written as V = I*R; to recover the current from a measured voltage drop, it is rearranged as I = V/R, which is the principle behind shunt sensors.
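As a minimal illustration, the conversion from measured voltage drop to current is a one-line application of Ohm's law. The 100 µΩ shunt value here is an illustrative assumption, not a specification for any real part:

```python
# Shunt measurement: a known shunt resistance R sits in series with the
# load; the voltage drop V across it gives the current as I = V / R.

def shunt_current(v_drop_v, r_shunt_ohm):
    """Current through the shunt, from Ohm's law I = V / R."""
    return v_drop_v / r_shunt_ohm

# Example: a 50 mV drop across a 100 micro-ohm bus-bar shunt implies 500 A.
print(f"{shunt_current(0.050, 100e-6):.0f} A")
```

The dependence on a tiny, precisely known resistance is also the weakness: at hundreds of amps the shunt dissipates real heat, and its resistance drifts with temperature, which feeds directly into the accuracy limits discussed below.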

Types of current sensors

There are several current sensor technologies: shunt (or direct in-line) current sensors, open-loop current sensors, closed-loop current sensors and flux gate current sensors. Each has certain advantages and disadvantages.

In a shunt current sensor, a precisely calibrated shunt resistor, a conductive copper-alloy bar often referred to as a bus bar, is placed in series with the load (the part of the circuit where current needs to be measured). Because the shunt’s resistance is precisely known, the voltage drop that develops across it as current flows is proportional to that current, and the current can then be accurately calculated.

Shunt sensors have some advantages: they are very robust and, thanks to a very simple design, typically lower in cost. However, they also have several distinct disadvantages: excessive heat, zero offset, poor accuracy, required compensation, susceptibility to corrosion, and creepage. Some end users prefer shunt-based solutions for low-current (around 50 amp) measurements. However, as current measurement range and accuracy requirements increase, EV suppliers are migrating away from shunt-based current sensors to magnetic flux-based current sensors, especially in high-current environments from ±500 amps to ±1500 amps and beyond, to further improve measurement accuracy.

Honeywell offers a variety of magnetic-based current sensors in different configurations to satisfy customers’ high-current application needs and requirements. These sensors fall into three types: open-loop Hall-effect current sensors, closed-loop Hall-effect current sensors, and flux gate and advanced flux gate current sensors.

The open-loop Hall-effect current sensor comprises a few key components: a Hall element, a ferromagnetic core and an amplifier. Functioning as a transducer, the Hall element detects the presence and intensity of a magnetic field, generating a voltage corresponding to the current in the targeted conductor.

Open-loop current sensors have both benefits and drawbacks. On the positive side, they offer a simpler design, reduced cost compared to closed-loop Hall technology and a notable advantage in response time. These features make them particularly suitable for motor control applications. Additionally, open-loop current sensors exhibit higher current measurement capability and are well suited to operation across a wide temperature range. Even though open-loop sensors can be very accurate, they are not as accurate as a closed-loop Hall-effect design. Also, if the internal circuitry is not designed correctly, temperature drift can pose a challenge.

Open-loop Hall-effect current sensors offer low cost, simpler design, compact size, light weight, high bandwidth and fast response time.

The Honeywell CSHV open-loop current sensor. (Image: Honeywell.)

The closed-loop Hall-effect current sensor is designed with several key components: a ferromagnetic core, a Hall-effect sensor, a secondary conductor and a feedback amplifier. The core concentrates the magnetic field. As primary current (IP) flows through the core’s conductor wire, it generates and concentrates a magnetic field within the core. The Hall-effect sensor detects this magnetic field, producing a proportional voltage corresponding to the primary current. Subsequently, the feedback amplifier amplifies this voltage and directs it back to the secondary coil, generating a magnetic field in the opposite direction.

Closed-loop current sensors can be designed to measure ac and dc currents and offer high accuracy and low temperature drift.

Closed-loop current sensors offer high accuracy, high sensitivity and linearity, lower offset error, temperature stability and immunity to magnetic field drift.

The Honeywell CSNV1500 closed-loop current sensor. (Image: Honeywell.)

Flux gate current sensors operate in a similar manner to Hall-effect-based closed-loop current sensors. However, the technology used to sense the magnetic field in the sensor’s magnetic core is different. In a flux gate sensor, the primary conductor carrying the current to be measured passes through the center of a magnetic core loop, and the current flowing in the conductor generates a magnetic flux in the core.

Flux gate current sensors are versatile, accurate, and have excellent linearity and wide frequency response. These sensors are used in a wide range of applications, including electric vehicles, EV charging stations, renewable energy systems, power and industrial automation.

The Honeywell CSNV700 flux gate current sensor. (Image: Honeywell.)

Current sensors in EVs

In electric vehicle applications, a battery management system (BMS) is a crucial and sophisticated subsystem designed to monitor, control and optimize the performance of the rechargeable batteries within the vehicle’s battery pack. The BMS continuously monitors various parameters and the health of the batteries within the pack, including voltage, current and temperature. Real-time monitoring with a current sensor allows the BMS to detect any abnormalities or deviations from optimal operating conditions.

Current sensors also play a pivotal role in motor control applications. They facilitate real-time monitoring of the current flow through the motor. They also provide robust protection for the motor and prevent potential damage caused by excessive current flowing into the motor’s windings. These sensors can initiate protective measures such as motor shutdown or alarm activation when the current exceeds predefined thresholds.

Honeywell offers an extensive portfolio of advanced current sensors. Through continuous innovation and refinement, Honeywell develops new products that offer differentiation from our competition. To learn more, visit Honeywell at TTI.

Preparing for the 48-volt shift in automotive systems
https://www.engineering.com/preparing-for-the-48-volt-shift-in-automotive-systems/
Wed, 05 Jun 2024
Changing to 48-volt vehicle architecture is inevitable, according to an automotive Tier 1 supplier.

TTI has sponsored this post.

Ever since internal combustion engines came to dominate the automotive industry, vehicles have been primarily mechanical systems, with electrical add-ons here and there. This perception is changing with the rise of electrically powered vehicles and features. Consequently, the engineers tasked with making this shift a reality must overcome a challenge: modern 12-volt electrical systems can’t keep up.

The shift to more electric vehicles is pushing the automotive industry to a 48-volt standard. (Image: Bigstock.)

Go back far enough and engineers only needed a six-volt battery to run headlights, turn signals and other essential features. But as more electrical systems (such as power windows, locks and stereos) became standard, the six-volt system could no longer keep everything running. As a result, the industry switched to a 12-volt standard in the 1950s.

Modern day vehicles are mirroring this history. Engineers designing vehicles — be it cars, snowmobiles, electric scooters, jet skis and even forklifts — must now consider numerous complex, power-hungry systems and features. In automotive, for example, advanced driver assistance systems (ADAS), autonomous driving systems, sensors, infotainment modules and on-board computers are becoming standard. These electronics put too much strain on a 12-volt battery — thus 48-volt batteries are predicted to become the new standard architecture in transportation. To help smooth this transition towards a 48-volt future, Molex offers its MX-150 mid-voltage interconnect technologies.

The shift to 48-volts is inevitable

Ironically, the transfer to 48-volt systems is stymied by the sheer number of electronics within modern vehicles. According to Kirk Ulery, distribution business development manager at Molex, though the need to move to 48-volts is apparent, most standard parts available to automotive manufacturers are designed for 12-volt architectures. This makes it expensive and complex to jump to 48-volt systems.

The shift towards 48-volts is less a question of if, and more a question of when. (Image: Bigstock.)

Nonetheless, Ulery sees the writing on the wall. “We’re getting to a point where the wire size must increase in a 12-volt system to handle the amount of power you need for all the new features. That’s where the 48-volt system comes in. It goes back to Ohm’s law. When you increase your voltage by a factor of four, you increase the power by a factor of four. So, you have four times the wattage to power these devices.”

We can see the shift to 48-volts today with electric and mild-hybrid vehicles. In mild hybrids, the vehicle will stop the internal combustion engine and run on electrical power when able. It will then start the engine as required. Meanwhile, fully electric vehicles run completely on electrical motors.

“The common thing is that they are moving traditional mechanical functions off a serpentine belt to a series of electric motors,” points out Ulery. He gives an example of a heavy-duty pickup truck using mechanical energy for its power steering. In many vehicles this function is becoming electrified. “The amount of time you need the power steering was robbing some of the engine’s horsepower. By moving it to a separate electrical system, you can control that and maintain more power through the drivetrain … It makes a significant difference in the amount of power you have for the vehicle.”

This demonstrates how even heavy-duty applications are becoming electric in mechanical vehicle systems, and how much the number of power-hungry systems operating electrically is increasing. As a result, 12-volt batteries will not be able to handle the increased load. Therefore, the shift to 48-volt architectures is less a question of if, and more a question of when.

The benefits and challenges of transitioning vehicles to 48-volts

For engineers, one of the primary benefits of 48-volt architecture is its ability to operate at similar wattages using smaller gauge wires. In other words, the larger the voltage, the lower the current needs to be to maintain the power. Thus, smaller, lighter and less expensive wiring harnesses can be used to improve a vehicle’s range, price and ecological footprint.

Ulery sees this as a primary incentive for the change. “The current drive to 48-volts has to do with smaller wires. They cost less, weigh less and are easier to maneuver [around other internal systems under the hood] … Generally, smaller wires with the same amount of wattage [produce] a significant reduction in weight and costs.”

The higher voltage also creates the ability to design a wiring harness with lower resistive losses. “There are some requirements you have to look at when you get to smaller wires. They tend to have slightly higher resistance,” says Ulery. “When you design a wiring harness you know the bulk resistance. [It] is a significant factor in determining the size of the wires.” This means engineers can optimize the wire harness for weight and resistance loss by adjusting the size of the wires. Generally, 48-volt systems can result in a lighter harness that also has lower resistance losses when compared to 12-volt architectures.

Like weight, temperature is another limiting factor when designing the wiring harness of a vehicle. Electric and hybrid vehicles need to stay cool, as heat increases the rate of mechanical deterioration and can cause catastrophic battery failure. Since the shift towards a 48-volt architecture can result in a wiring harness with lower resistive losses, it will also lose less energy to heat. As a result, the higher voltage could translate into a vehicle that operates at a lower temperature.

“Temperature is an issue we calculate for whenever we’re designing automotive wiring,” Ulery explains. “We typically use a current rating that’s based on a specific temperature rise when fully energized.” He adds that a fully energized system is an extreme case, and the calculation would result in an over-engineered wire — which would produce a cooler vehicle anyway. He says, “the duty cycles are such that temperature rise is not usually an issue.”

So why not take the plunge and shift to 48-volts now? Costs, part availability and legacy systems within the automotive industry make the transition difficult. “It’s a challenge to find components that work with 48-volts,” says Ulery. He points to the Tesla Cybertruck as an example: though the new vehicle is primarily a 48-volt architecture, the company stepped down voltages in some locations to ensure available parts were compatible.

“No vehicle OEM builds everything,” says Ulery. “The only thing they tend to make are the engines and body panels. Everything else in the car is purchased. Back-up sensors, digital cameras, radios are all purchased. So, to change the industry will take time as suppliers need to change parts to the 48-volt architecture.”

In other words, shifting to 48-volts too soon comes with part availability, compatibility and format adoption risks. In addition, the manufacturer would need to increase development costs to design parts that work at the new voltage. So, much like the Cybertruck, as more vehicles move to 48-volts, many of their sub-systems will likely still operate at 12-volts.

MX-150 mid-voltage connector provides new options for transportation

One part that is widely available for 48-volt automotive and transportation applications is the MX-150 mid-voltage connector from Molex. This connector is an expansion of the MX-150 product line, a standard in the automotive industry, so selecting it helps minimize format adoption and compatibility risks.

MX-150 mid-voltage connectors. (Image: Molex.)

“MX-150 is one of the largest global Molex product lines,” says Ulery. “With the mid-voltage [version], we have it certified up to 60-volts. We made some slight changes to our housing to make sure that we meet all IEC requirements for creepage and clearance. … So, customers are assured that when they use it for 48-volts, or even 60-volts, that the connectors will pass any type of regulatory requirements that they might be subjected to.”

In fact, the connector is already in use for 48-volt applications. And since it’s compatible with legacy MX-150 connectors, customers shifting to a 48-volt architecture spend less time and money on engineering. For example, the connector position assurance (CPA) key ensures all connections are properly mated and safe from accidental disconnection; the engineering to connect legacy MX-150 connectors is already done.

“You can both feel and hear the latch click,” says Ulery. “Then the CPA slides under the latch to make sure that it cannot come apart unless you physically move the key. If anyone’s ever tried to do work on even simple electronics on a vehicle, you learn very quickly how hard these things are to get apart.”

MX-150 mid-voltage connectors also have applications for other vehicles. “We’re seeing so much electrification today,” says Ulery. “Electric bicycles, scooters, even electric snowmobiles, personal watercraft and marine applications. Every one of those that’s set up to be fully electrified can benefit from a 48-volt system.”

Various electric vehicles can benefit from a 48-volt architecture. (Image: Bigstock).

Engineers testing parts in physical prototypes running on 48-volts must still overcome one hurdle: sourcing the parts to test. Vehicle parts tend to be available in bulk to reduce their costs for manufacturers. In other words, getting access to a few dozen connectors for the sake of testing can be difficult.

Ulery says for Molex customers this problem is solved by their partner TTI. “TTI has our reliable interconnect technologies in stock, and local contacts so [customers] can call immediately and get the parts they need. I work closely with TTI to get them to understand where connectors can be used and the value proposition to add them into designs. They have the engineering expertise to help [TTI and Molex] customers get their products out faster.”

To get access to MX-150 mid-voltage connectors and other Molex connectors via TTI, visit Molex at TTI.

The post Preparing for the 48-volt shift in automotive systems appeared first on Engineering.com.

Understanding battery management systems: Key components and functions | https://www.engineering.com/understanding-battery-management-systems-key-components-and-functions/ | Thu, 16 May 2024 | Here’s what you need to know about fuses, sensors, controllers and all the other building blocks of the BMS.

TTI Inc. has sponsored this post.

Batteries store more than just electricity. In a world desperate to transition to renewable energy, batteries store the promise of a greener future. And to fulfill that promise, they need the help of a battery management system, or BMS.

“Any place where there are batteries, there has to be a battery management system,” Mohammad Mohiuddin, field applications engineer at Eaton, told engineering.com.

Mohiuddin and his team help engineers design and build battery management systems that can handle the unique requirements of their applications. While there are some off-the-shelf BMSs, most of the time these crucial systems need a designer’s touch. Here’s what you need to know about how they work and why they’re so important for the energy transition.

What is a battery management system?

Today’s battery-powered applications are significantly more complex than a pair of classic AAs. Electric vehicles (EVs), for instance, involve massive lithium-ion battery packs with multiple cells connected in series and parallel. It’s essential that these cells charge and discharge at an equal rate, which enables the system as a whole to perform at its best for the longest possible lifetime. Even more importantly, these batteries must work safely within their operating limits, as thermal runaway is a real hazard in lithium-ion battery systems.

Primary functions of a BMS. (Image: Eaton.)

And EVs are easy compared to today’s energy storage systems. These are room-sized banks of batteries that store energy from renewable sources, such as solar and wind, and distribute it as needed. As with EVs, all the cells of an energy storage system must be put to optimal use and protected from adverse conditions. But while EV batteries have a capacity measured in tens of kilowatt-hours, energy storage systems can reach into the gigawatt-hour range, with significantly higher power outputs.

Complicating the matter even further is the addition of supercapacitors into the mix, an increasingly common technique for large-scale energy storage. While batteries have been a well-understood technology for many years, supercapacitors are on the frontier of energy storage. Combining the two technologies is a challenge for many of Mohiuddin’s clients. “They want to know the conditions that a supercapacitor has to be operated along with the batteries so that the two can go together,” he said.

Despite their differences, EVs and energy storage systems both solve these challenges in the same way: the battery management system. The BMS is the brain of any battery system. It’s responsible for monitoring the condition of every cell in the battery pack and distributing the load accordingly, keeping track of important parameters including state-of-charge (SoC) and state-of-health (SoH). The BMS is also responsible for optimizing the life of the battery system by performing charging and discharging in a safe and sustainable way. If something should go wrong, it’s the BMS’s job to safely bring the battery under control or shut it down if necessary.

Key components of a battery management system

Any complex battery-powered application requires a BMS customized for its requirements. But while the details will be different, there are several components common to every BMS. The below diagram shows these BMS building blocks.

The building blocks of a BMS. (Image: Eaton.)

If the BMS is the brain of the battery, the controller is the brain of the BMS. This chip coordinates the functions of the BMS, monitoring the state of each cell and balancing the load amongst them. The controller also maintains communication with other systems, such as an EV’s main computer. This communication can be either wired or wireless. If wired, the signal will be filtered through a common-mode chip inductor before passing through to the connector. If wireless, the controller will be connected to an RF module, typically for Wi-Fi or Bluetooth Low Energy (BLE). A power module brings down the high voltages at the BMS input to smaller values suitable for the electronics in the controller.
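As a rough illustration of the controller's monitoring and balancing role described above, here is a hypothetical sketch. The voltage thresholds, balancing band and `check_and_balance` helper are invented for illustration, not Eaton's implementation:

```python
# Hypothetical sketch of a BMS controller's monitoring/balancing pass:
# flag cells outside a safe voltage window and mark high cells for passive
# balancing. All thresholds are illustrative example values.

CELL_MIN_V, CELL_MAX_V = 2.5, 4.2   # example safe window for a Li-ion cell
BALANCE_BAND_V = 0.02               # bleed cells more than 20 mV above the lowest

def check_and_balance(cell_voltages):
    faults = [i for i, v in enumerate(cell_voltages)
              if not CELL_MIN_V <= v <= CELL_MAX_V]
    lowest = min(cell_voltages)
    bleed = [i for i, v in enumerate(cell_voltages)
             if v - lowest > BALANCE_BAND_V]
    return faults, bleed

faults, bleed = check_and_balance([3.70, 3.71, 3.76, 4.25])
print("fault cells:", faults)   # [3] -- cell 3 exceeds the 4.2 V limit
print("bleed cells:", bleed)    # [2, 3] -- more than 20 mV above the lowest cell
```

A real controller runs a loop like this continuously, feeding the results to the balancing circuits and, if a fault persists, to the fuses described below.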

Closeup of the Eaton EPM12V1 power module, a non-isolated DC-DC converter suitable for battery management systems, connected to an Eaton common-mode choke and terminal block. (Image: Eaton.)

One of the most important components in the BMS is the primary fuse, which provides overcurrent protection to the whole battery pack. The BMS also includes a self-control fuse further down the circuit, attached to the BMS controller, that provides an additional layer of protection. “If an anomaly occurs, if the current is flowing and it is not being controlled for some reason, the controller can actually blow the self-control fuse open,” Mohiuddin said. Finally, there are additional fuses on each cell that can act quickly to shut down problematic cells without having to shut down the entire battery pack.

Another fundamental BMS component is the current sense resistor, which monitors the current flowing in and out of the battery pack and feeds that data to the BMS controller. This is no ordinary resistor. It must have both an extremely small resistance, on the order of a few milliohms, and an extremely tight tolerance, on the order of 1% or less. It must also be able to handle high levels of power, as much as 20 watts, without breaking down. To meet these requirements, Mohiuddin explained that Eaton’s current sense resistors are designed with specialized materials.

There’s more. “These resistors are available not only in two terminals, but four terminals,” Mohiuddin said, describing a measurement scheme called the Kelvin, or 4-wire, method. “The two additional connection points allow precise monitoring of the current going through it and the voltage drop across it.”
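The measurement itself is Ohm's law across the sense element. A sketch with an assumed 2 mΩ part (an example value, not a specific Eaton resistor):

```python
# Current sensing with a low-value shunt: I = V_drop / R, while the part
# itself must dissipate P = I^2 * R. The 2 milliohm value is illustrative.

R_SHUNT = 0.002  # 2 milliohm current sense resistor

def sense(v_drop_v):
    current_a = v_drop_v / R_SHUNT            # Ohm's law at the sense terminals
    dissipation_w = current_a ** 2 * R_SHUNT  # heat the resistor must survive
    return current_a, dissipation_w

i, p = sense(0.2)  # a 200 mV drop
print(f"{i:.0f} A flowing, {p:.0f} W dissipated")  # 100 A flowing, 20 W dissipated
```

Note how a 200 mV drop at 100 A already reaches the 20-watt dissipation figure mentioned above, which is why these parts need specialized materials.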

Finally, the BMS monitors the temperature of the batteries using negative temperature coefficient (NTC) thermistors. If the temperature gets too high, the controller can adjust the current to prevent dangerous overheating.
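A common way to convert an NTC thermistor's resistance into temperature is the simple beta model, 1/T = 1/T₀ + (1/B)·ln(R/R₀). A sketch assuming a generic 10 kΩ, beta-3950 part (example values, not a specific BMS thermistor):

```python
import math

# Converting an NTC thermistor reading to temperature with the simple beta
# model: 1/T = 1/T0 + (1/B) * ln(R / R0). The part values are generic examples.

R0, T0, BETA = 10_000.0, 298.15, 3950.0   # 10 kΩ at 25 °C (298.15 K), beta in kelvin

def ntc_temp_c(resistance_ohm):
    inv_t = 1.0 / T0 + math.log(resistance_ohm / R0) / BETA
    return 1.0 / inv_t - 273.15

print(round(ntc_temp_c(10_000), 1))  # 25.0 at the nominal resistance
print(round(ntc_temp_c(5_000), 1))   # hotter: NTC resistance falls as temperature rises
```

The controller compares readings like these against its thermal limits and derates the current before overheating becomes dangerous.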

Sourcing the right components for your BMS

With the BMS serving such an important role in today’s advanced battery-powered applications, it’s crucial for engineers to design these systems to the highest possible standards. While the specific components necessary for each BMS will differ, look for components that have been designed and tested for battery management applications. These will provide the temperature, power and durability requirements that are so often necessary in BMS design.

Eaton offers battery management system components in each of the building block categories described above. For example, Eaton’s Bussmann series CC06FA fuses are designed for automotive BMS applications, as are Eaton’s Bussmann series CSKA current sense resistors, which use the 4-wire Kelvin method for increased measurement accuracy. If you need help designing your BMS, Eaton application engineers like Mohiuddin can share their expertise.

“We advise customers on what is needed and what is going on with harvested energy from different sources,” he said.

To learn more about components for battery management systems, visit Eaton at TTI.

The post Understanding battery management systems: Key components and functions appeared first on Engineering.com.

What engineers need to know about current sensors for EV applications | https://www.engineering.com/what-engineers-need-to-know-about-current-sensors-for-ev-applications/ | Mon, 13 May 2024 | Whether for the BMS or motor control, here are the key specs to understand when sourcing these critical EV components.

TTI has sponsored this post.

Electric vehicles (EVs) continue to grow in popularity and market share, and electric current is the fuel of the future. Current sensors are a critical component of today’s EVs, serving two primary applications according to Ajibola Fowowe, global offering manager at Honeywell.

“The battery management system (BMS) uses current sensors, in conjunction with other sensors such as the voltage and temperature sensors, to monitor the state of charge and overall health of the battery pack. The other use for current sensors is in motor control, where it is relied on to quickly detect and isolate a fault in the electric drive,” Fowowe told engineering.com.

Regardless of use case, there are several considerations EV engineers must understand when selecting among the many available current sensors. Here’s what you need to know.

Types of EV current sensors

There are different types of current sensors that each have advantages and disadvantages for EV applications.

Closed loop current sensors

Closed loop current sensors have a feedback system for improved measurement accuracy. A magnetic core concentrates the magnetic field generated by the flow of current, and the sensor provides a voltage proportional to the current detected in the core. This enables the sensor to generate a precise current measurement. Because of their high accuracy and stability, closed loop sensors are well suited for use in the BMS.

The Honeywell CSNV 500 is a closed loop current sensor rated for a primary current measurement range of ±500 amps of direct current. The CSNV 500 features a proprietary Honeywell temperature compensation algorithm with digital CAN output, to provide high accuracy readings within ±0.5% error over the temperature range of -40° to 85° C for robust system performance and reliability.

The Honeywell CSNV 500 closed loop current sensor. (Image: Honeywell.)

Open loop current sensors

Open loop current sensors operate on the principle of magnetic induction. They consist of a primary winding, through which the current travels, and a secondary winding that measures the induced voltage. Open loop sensors require less additional electronics and processing compared to closed loop sensors, resulting in faster response times. However, they require additional calibration because they are more sensitive to variations in heat and magnetic field. This also makes them less accurate, with error reaching approximately 2% of the primary reading.

The fast response time of open loop current sensors makes them ideal for motor control functions. Motor control applications don’t require the same level of precision as the BMS, so the loss of accuracy compared to a closed loop or flux gate sensor isn’t critical.

The Honeywell CSHV line of open loop sensors has a range of 100 amps to 1,500 amps, with response times as fast as six microseconds. These sensors are used in fault isolation and fault detection, as well as controlling motor speed. They can also be used in battery management systems that do not require very high accuracy, such as in hybrid electric vehicles. They use AEC-Q100 qualified integrated circuits to meet high quality and reliability requirements.

The Honeywell CSHV series open loop sensor. (Image: Honeywell.)

Honeywell’s CSNV 1500 has both closed loop and open loop functionality, enabling the sensor to meet a 1% accuracy requirement; it is designed for applications that require high accuracy. The CSNV 1500 is used for similar EV applications as the CSNV 500, as well as stationary energy storage systems and industrial operations.

Flux gate current sensors

Flux gate current sensors measure changes in the magnetic flux of a current as it passes through a magnetic loop, from which current measurements can be derived. The Honeywell CSNV 700 is designed for applications whose requirements fall between 500 A and 1,000 A. It has a better zero-offset and a higher sensing range than 500 amp sensors, but also higher power consumption than a closed loop sensor. The CSNV 700 has a similar accuracy rating to the CSNV 500, at 0.5%, and it also uses AEC-Q100 qualified integrated circuits.

As with closed loop sensors, the flux gate sensor is best used in BMS settings that require high accuracy. When using flux gate sensors, however, engineers need to be mindful of their higher power requirements, which could consume more battery energy.

Honeywell’s CSSV 1500 is a combination open loop and flux gate sensor. It was designed to meet Automotive Safety Integrity Level C (ASIL-C) requirements for safety-critical applications where customers desire a higher level of reliability and performance, a requirement typical of battery electric vehicles (BEVs). While many 1500 A sensors consume more power, the combination of open loop and flux gate technologies uses less power while still meeting the accuracy and functional safety requirements.

Shunt current sensors

A shunt current sensor measures the voltage drop across a sense resistor placed in the conduction path between a power source and a load. It is an inline current sensor connected directly to the busbar; closed loop, open loop and flux gate sensors are non-contact sensors that don’t have that direct connection.

One of the benefits of a shunt sensor is that it can provide an instantaneous measurement of current. However, it generates more heat and contributes to power loss in the circuit, creating parasitic energy waste. Fowowe says that advancements in shunt technology are increasing its attractiveness in high voltage systems, and Honeywell is actively researching additional value that can be derived from shunt technology, such as the potential combination of current and voltage measurements into one sensor to reduce the overall cost of the BMS.

Other key considerations for EV current sensors

In addition to considering which sensor to use in which application, engineers will also need to factor in other variables. Since the sensor needs to work properly in a magnetized environment, its capacity to handle magnetic interference is important. For BMS applications that rely on a high level of accuracy, engineers will need to consider the sensor’s zero-offset, which is the amount of deviation in output or reading from the lowest end of the measurement range.
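To see why zero-offset matters, note that a BMS commonly estimates state of charge by coulomb counting, i.e. integrating the measured current over time, so a constant sensor offset accumulates into a growing charge error. A back-of-envelope sketch (the offset values are illustrative, not sensor specifications):

```python
# A constant current-sensing offset integrates into a state-of-charge error
# when the BMS coulomb-counts. Offsets here are illustrative examples.

def soc_drift_ah(offset_a, hours):
    """Charge error (amp-hours) accumulated by a constant current offset."""
    return offset_a * hours

for offset_a in (0.1, 0.5):
    print(f"{offset_a} A offset -> {soc_drift_ah(offset_a, 24):.1f} Ah error per day")
```

Even a fraction of an amp of offset can add up to several amp-hours of apparent charge per day, which is why BMS-grade sensors specify zero-offset tightly.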

Ease of integration is also important to consider. EVs can use either the controller area network (CAN bus) standard or analog outputs. CAN communication is more common in the BMS. CAN bus response time is limited by the protocol to around 10 milliseconds, which is acceptable for the BMS. For more immediate measurements, motor control functions use analog outputs, which can respond in microseconds.

Another factor to be mindful of is the EV’s driving environment. EVs need to be able to function properly in any conditions, from a heat wave in Arizona to a snowstorm in New York. Therefore, the sensor’s operating temperature range needs to be factored in. According to Fowowe, Honeywell’s sensors are built to maintain performance in temperatures ranging from -40 to 85 degrees Celsius; the sensors feature a Honeywell patented multi-point temperature compensation algorithm to ensure the sensors can deliver very high accuracy and performance under any driving condition.

To learn more about current sensors for EVs, visit Honeywell at TTI.

The post What engineers need to know about current sensors for EV applications appeared first on Engineering.com.

DC link and safety film capacitors enhance efficiency and suppress EMI | https://www.engineering.com/dc-link-and-safety-film-capacitors-enhance-efficiency-and-suppress-emi/ | Mon, 11 Mar 2024 | High capacitance, high performance capacitors are ideal for high frequency circuits and safety-critical applications.

TTI has submitted this article. Written by Mohammad Mohiuddin, field application engineer at Eaton.

With increased speed, volume and complexity requirements, today’s electronics require enhanced EMI suppression, increased efficiency and reduced board space while meeting high compliance standards. DC link and safety film capacitors are the high-performance solution for high frequency circuits and safety-critical applications.

This article will examine the abilities and strengths of these two popular capacitor types to equip engineers with the knowledge they need to maximize their next design.

What are DC link capacitors?

DC link capacitors are constructed of metallized polypropylene film encapsulated with epoxy resin in a plastic box with two or four tinned copper-wire terminals. These capacitors often act as a buffering stage between the DC-DC converter and an inverter (DC-AC), filtering high frequency components, smoothing low frequency ripple and sinking currents from the load side that would otherwise flow back to the first stage.

These polypropylene film capacitors offer considerable advantages over electrolytic capacitors. While they do not have the energy density of an electrolytic capacitor, they have a higher current-handling ability and lifetime. The metallized construction enables a self-healing property, which greatly extends this component’s lifetime.

What are safety film capacitors?

Film safety capacitors (also known as film EMI suppression capacitors) are constructed of metallized polypropylene film encapsulated with self-extinguishing resin in a case made of polymer material meeting the requirements of UL 94 V-0. These products are offered in many different sizes, lead lengths and terminal configurations.

Standard and automotive grade families are available for each class with both DC link and safety film capacitors. Automotive grade capacitors are THB Grade IIIB and AEC-Q200 qualified products for high reliability and harsh environment applications.

(Source: Eaton.)

How DC link capacitors are used

DC link capacitors are an intermediate stage between the DC source, such as utility mains, a battery or a solar panel, and an inverter. From there, the inverter will send the AC signal to the load (such as a motor, lighting, computer, or appliance).

The capacitance of a capacitor is equal to the total charge stored on its parallel plates divided by the applied voltage: C = Q/V.

The rated voltage of the DC link capacitor should be greater than the voltage of the DC link in order to account for the additional voltage ripple. An increase in capacitance will decrease the amount of ripple in the DC voltage. Designers often set maximum voltage ripple to be 5% to 10% to avoid using larger capacitance values. This is also recommended to keep the capacitors functioning within a safe operating voltage.

The change in current that flows through the DC link capacitor will yield the change in charge (Δq). An increased switching frequency will narrow the width of each current pulse through the DC link capacitor. This narrowing decreases the stored charge variation of the capacitor. In other words, the capacitor charge variation is inversely proportional to the PWM switching frequency. For this reason, the inverter’s switching frequency must also be considered, as the DC link capacitor must be able to handle this frequency.
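A first-order way to see this frequency dependence: the ripple voltage is roughly the charge swing per switching period divided by the capacitance, ΔV ≈ I·Δt/C. A sketch with illustrative values (not from Eaton's datasheets):

```python
# First-order view of DC link ripple vs. switching frequency: ripple voltage
# is roughly the charge swing per switching period over the capacitance,
# dV ≈ I * dt / C. All values are illustrative examples.

def ripple_v(ripple_current_a, switching_freq_hz, capacitance_f):
    dt = 1.0 / switching_freq_hz   # one switching period
    return ripple_current_a * dt / capacitance_f

for f_sw in (10e3, 50e3, 100e3):
    dv = ripple_v(ripple_current_a=20, switching_freq_hz=f_sw, capacitance_f=500e-6)
    print(f"{f_sw / 1e3:>5.0f} kHz: ~{dv:.2f} V ripple")
```

Raising the switching frequency shrinks the ripple for a given capacitance, or equivalently allows a smaller capacitor for the same ripple target.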

DC link capacitor benefits

There are many benefits of DC link capacitors:

  • High capacitance density saves board space and lowers the voltage ripple.
  • High contact reliability ensures the capacitor remains fitted to the PCB in the event of vibration.
  • Self-healing properties allow recovery from a temporary short caused by dielectric breakdown, for a longer service life and fewer premature failures.
  • High frequency performance is possible as power supplies can use the higher switching frequencies for greater efficiency.
  • Low ESL and ESR enable the capacitor to operate more efficiently.
  • High ripple current allows the DC link capacitor to withstand larger ripple currents for more design flexibility.

How safety film capacitors are used

Power lines may receive high voltage noise from various sources in addition to their standard sinusoidal voltage (120 VAC at 60 Hz in the US, 240 VAC at 50 Hz in Europe, 100 VAC at 60 Hz in Japan, etc.). The general purpose of the power supply line filter is to control conducted emissions composed of both common mode and differential mode noise.

There are two subclasses of capacitors used for EMI noise suppression in safety-critical applications. Class X capacitors are used across the high-voltage input while Class Y capacitors are typically used from high voltage to ground. A high pulse current load is the most important feature of these capacitors, so these are generally classified according to their rated voltage and the peak impulse voltage these devices can safely withstand.

(Source: Eaton.)

Safety is a major consideration for these EMI suppression filters, especially in directly connected applications where voltages (and transients) are high enough to cause injury. Failure of a Class X capacitor could lead to fire, while failure of a Class Y capacitor could lead to electrocution. UL and IEC standards and safety approvals (including UL/cUL, VDE/ENEC and CQC) are in place to protect the user.

Film safety capacitor benefits

The benefits of film safety capacitors include:

  • High capacitance stability delivers high performance over temperature, voltage and time.
  • Self-healing properties allow recovery from a temporary short caused by dielectric breakdown, for a longer service life and fewer premature failures.
  • The ability to withstand overvoltage stress is one of the most critical factors in choosing a safety capacitor.
  • Flame-retardant plastic case and resin lessen safety risks from part failure.
  • UL/cUL, VDE/ENEC and CQC safety approvals meet the requirements specified by standards organizations.

DC link and safety film capacitor applications and use cases

Switching power supplies will produce noise at the switching frequency as well as its harmonics, creating the potential for interference with nearby equipment. To avoid such interference, these power supplies follow EMC regulations as well as safety regulations, especially for equipment such as medical power supplies.

The below schematic shows the main filter circuit where the filter has components to suppress common mode and differential mode noise between the switching circuits and the line. For EMI suppression, the differential mode filtering occurs between the supply lines reaching the mains while the common mode filtering reduces noise from ground loops and ground noise between systems.

Input supply with EMI suppressing safety capacitors. (Image: Eaton.)

While safety capacitors are used in the input supply, before the rectification (AC-DC) to prevent EMI, the DC link capacitor is used prior to the DC-AC inverter. This can, for instance, be used between a battery and an inverter for an AC motor drive circuit in an EV, or between a solar panel and solar inverter. Both applications are pictured below.

AC motor drive circuit where a DC link capacitor is used between the input converter and output inverter. (Image: Eaton.)

A maximum power point tracking technique where a DC link capacitor is used for droop control. (Image: Eaton.)

Final notes

DC link capacitors offer a desirable alternative to electrolytic capacitors with a high capacitance density in reliable metallized film-based capacitor construction. Suitable in high frequency circuits, these capacitors can withstand high ripple currents and can effectively suppress voltage ripples and prevent current flow from the inverter stage back to the source.

Safety film capacitors effectively suppress EMI in line-to-line or line-to-ground applications while withstanding overvoltage surges from transients. Adherence to safety standards ensures that these components can be easily integrated in safety-critical applications like automotive, medical and more.

To learn more, visit DC Link Film Capacitors | TTI, Inc.

The post DC link and safety film capacitors enhance efficiency and suppress EMI appeared first on Engineering.com.

Achieving more effective machine vision interfaces | https://www.engineering.com/achieving-more-effective-machine-vision-interfaces/ | Thu, 07 Mar 2024 | USB3, GigE, Camera Link and CXP explained. Here’s everything engineers need to know to choose the correct interface for machine vision automation.

TTI has sponsored this post.

(Image: 3M.)

With what scholars are calling the Fourth Industrial Revolution upon us, manufacturing is steadily moving towards artificial intelligence (AI) to help make production lines more efficient and to better utilize human skill in a wide range of industries. In this shifting landscape with its expanding variety of applications, machine vision has emerged as a key force in driving productivity.

Employing systems of cameras to monitor manufacturing processes, machine vision is most immediately associated with reducing operator intervention and assuring quality control. But it can do much more. It can gather and report extremely accurate information against important operational metrics. It can isolate defective product, scan inventory, identify emergent events and carry out a quickly growing array of complex tasks that help increase throughput by cutting waste and reducing downtime while better tracking product quality. But as the need for real-time machine intervention increases, the required equipment must perform in ways that were not possible even a short time ago.

(Image: 3M.)

Interfaces for diverse applications

Standards for interfaces and interface hardware are key to successful machine vision. These standards express the confluence of bandwidth (resolution, frame rate, bit depth, etc.), data rate, signal speed and signal integrity to meet specific goals. They also dictate the types of connectivity equipment needed — cables, connectors, assemblies, connector shells, strain relief accessories and the like — which in turn must be balanced against local network capabilities as well as short- and long-term budget restrictions.

The opportunities are vast. But manufacturers must gain a deep understanding of their needs when considering machine vision automation. Optimal performance can be different for each manufacturer, portfolio offering, system and even production line. Considerations include:

Clarity: Especially with the advent of “smart” inspection platforms with multiple inspection metrics, remote data access and capabilities to monitor equipment effectiveness as well as the state of production, applications require sharp, clear real-time imaging to help provide brand-standard product quality. This can require high data volumes, and the bandwidth to transfer it quickly and completely.

Motion: Besides higher bandwidth, machine vision systems capturing real-time data from moving platforms will require durable cables that can withstand repeated motion. Systems using conveyors, where cameras must move in linear motion, require cables tested for repeated cycles on drag chain cable trays. For processes such as pick and place, robotic assemblies must often move cameras in random directions. Here, angled connectors can ease strain on cables and locking screws can help provide robust connections during vibration and short, random motions.

Distance: Many manufacturing lines require cable assemblies that can transmit data over longer distances without significant signal loss. Longer functional cable lengths can help reduce the need for additional cameras and equipment costs.

Automation level: Greater functionality requires more robust connections. Is the machine vision system required to simply capture and present images, enhancing operator decision-making? Or is an automated control unit making decisions based on data received, such as real-time triggering?

Speed/latency: Applications involving repetitive tasks — a chief source of inefficiency when performed without automation — can operate at high speeds. High signal speed and high data rates enable higher resolution and frame rates. Also, system latency must be low enough to keep pace with real-time processing.

Integration: As systems become more complex and one-of-a-kind — more cameras, smaller camera sizes, unique positioning and more — interfaces that connect to standard PC architectures and/or do not require additional hardware can be critical to helping lower overall system costs. Compatibility with long-established standards is also a plus as manufacturers often choose to alter or expand based on what they can already easily manage and troubleshoot.
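To make the bandwidth consideration above concrete, the raw data rate a camera generates can be estimated directly from resolution, bit depth and frame rate. This is a minimal sketch; the example camera parameters are illustrative assumptions, not figures from this article.

```python
# Illustrative only: estimate the uncompressed data rate a camera produces
# from the bandwidth factors named above (resolution, frame rate, bit depth).

def raw_data_rate_gbps(width_px: int, height_px: int,
                       bit_depth: int, fps: float) -> float:
    """Uncompressed video bandwidth in gigabits per second."""
    return width_px * height_px * bit_depth * fps / 1e9

# A hypothetical 5 MP monochrome camera at 8-bit depth and 60 frames/s:
rate = raw_data_rate_gbps(2448, 2048, 8, 60)
print(f"{rate:.2f} Gbps")  # ~2.41 Gbps before any protocol overhead
```

Protocol overhead means the interface must offer headroom beyond this raw figure, which is one reason higher-rate standards matter for real-time inspection.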

An overview of machine vision interfaces

The established and quickly emerging interface standards shaping the direction of machine vision are USB3 Vision®, Camera Link® and CoaXPress® (CXP), along with GigE Vision®, an industrial camera standard for data transfer over Gigabit Ethernet protocols.


USB3 Vision®

USB3 Vision is fast-growing and very popular for its compatibility and ease of integration. It accommodates high speed, high resolution data at a relatively low system cost, and allows for multi-camera configurations.

Its limited transmission distance often poses a barrier. But recent 3M USB3 cable assemblies can deliver transmission distances greater than 10 meters (and beyond 14 meters depending on cable type and equipment), far exceeding the standard requirements for USB3 Vision cable solutions.

GigE Vision®

GigE Vision stands out for its ease of system integration. Cable lengths can reach up to 100 meters and equipment is compatible with many different networks. Originally created to provide a framework for transmitting video and other control data over gigabit Ethernet networks, GigE Vision has become an easy way to connect multiple cameras at low cost. However, because it relies on the often limited bandwidth of local networks, it can suffer latency issues that critically affect tasks requiring high-speed, real-time precision.

Camera Link

Introduced in 2000, Camera Link was the first interface built specifically for machine vision applications. With the ability to transmit high-speed, high-resolution data, it is excellent for applications such as line scan cameras, where real-time data and low delay are required. Its functional cable length can be quite short (less than 5 meters), making it difficult to apply to mobile platforms. It can also be significantly higher priced when specified for systems requiring high-volume data capability.

CoaXPress®

CoaXPress, or CXP, is expected to become the mainstream interface for high speed and high resolution camera systems. With the recently released CoaXPress 2.0, it can provide transmission speed up to 12.5 Gbps/channel — high throughput for real-time applications and harsh industrial environments. CXP is excellent for multiple-camera systems, but it requires relatively complex support equipment and can result in higher overall expense.

With the latest cable assemblies providing functional cable lengths greater than 13 meters (CXP6) and greater than 10 meters (CXP12) in flex-durable, thinner-diameter (less than 5 mm O.D.) cable, CoaXPress is excellent for achieving longer transmission distances in larger systems.

Choosing the correct interface for machine vision automation

This table helps match manufacturing needs with available machine vision interfaces. Further refinements can be made based on local networks, physical space and desired level of automation.

Interface Choice Factors. (Source: TTI.)


*10 meter functional cable length is available from the 3M™ USB3 Vision Industrial Camera Cable Assembly, 1U30E Series. Performance beyond 14 meters can be achieved depending on the equipment used with the cable assembly.
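The factors in the table can also be expressed as a rough shortlist filter. In this sketch the cable-reach figures follow this article, while the nominal link rates (USB 3.0 at roughly 5 Gbps, Gigabit Ethernet at 1 Gbps, Camera Link Full configuration at roughly 5.4 Gbps) are assumptions about raw link speed rather than usable throughput; real selection must also weigh latency, cost, network capacity and desired automation level.

```python
# Rough interface shortlist. Reach figures follow this article; link rates
# are assumed nominal values, not guaranteed usable throughput.
INTERFACES = {
    # name: (nominal link rate in Gbps, practical cable reach in meters)
    "GigE Vision": (1.0, 100.0),
    "USB3 Vision": (5.0, 10.0),     # 3M assemblies: >10 m, up to >14 m
    "Camera Link": (5.4, 5.0),      # Full configuration; short reach
    "CoaXPress 2.0": (12.5, 10.0),  # 12.5 Gbps/channel; >10 m at CXP12
}

def shortlist(required_gbps: float, required_meters: float) -> list[str]:
    """Interfaces whose assumed rate and reach both meet the requirement."""
    return [name for name, (rate, reach) in INTERFACES.items()
            if rate >= required_gbps and reach >= required_meters]

# A 2.4 Gbps camera on a 10 m run rules out GigE (rate) and Camera Link (reach):
print(shortlist(2.4, 10.0))  # ['USB3 Vision', 'CoaXPress 2.0']
```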

As a global supplier of cable assemblies and related equipment for the world's leading machine vision standards, 3M can help specify interface solutions that maximize your manufacturing capabilities — and beyond. Our variety of well-proven Camera Link interconnect solutions is based on our extensive knowledge and experience as one of the originators of the Camera Link standard decades ago.

We continually introduce new solutions for USB3 Vision® and CoaXPress®, helping enable cable flex, transmission distance and design configurations not previously possible in manufacturing. At the same time, we’re exploring the possibilities of implementing machine vision standards for additional innovations in areas such as electronic devices. 3M experts are available for your unique requirements.

To learn more visit TTI and 3M.


What is SPE and What Does it Mean for Industrial Automation?
https://www.engineering.com/what-is-spe-and-what-does-it-mean-for-industrial-automation/
Thu, 18 Jan 2024
Single Pair Ethernet is poised to become a leading communication standard for industrial automation—here's why.

The post What is SPE and What Does it Mean for Industrial Automation? appeared first on Engineering.com.

TTI has sponsored this article.

Data communication systems were originally designed to let computers and workstations—largely stationary devices in offices and data centers—share information. Cable size and weight were of little concern, and their somewhat bulky and flimsy connectors weren’t exposed to harsh conditions. But the proliferation of the Industrial Internet of Things (IIoT) and automotive communication systems has caused network designers to consider more compact and robust cabling and connector options.

One solution that is growing in popularity is Single Pair Ethernet (SPE), based on the IEEE 802.3 Ethernet communication protocol and the IEC 63171-6 cabling and connector standard. Here’s everything engineers should know about SPE and how it’s improving industrial automation.

What is Single Pair Ethernet?

Ethernet typically employs eight wires: four pairs for transmitting and receiving. Single Pair Ethernet uses one pair of wires, with the transmitter and receiver operating at different carrier frequencies over the same pair. SPE can even carry power over the data line (PoDL).

A two-wire solution reduces cable bulk and weight, facilitates tighter bends and lowers the cost of cabling. SPE's ability to deliver power over the data line allows IIoT devices to operate without separate power supplies or batteries, and its compact connectors suit smaller components like sensors and cameras. Those sensors can easily send information to the cloud for data analytics via a high-speed Ethernet connection that supports data transfers up to 10 Gbps. The spec calls for fully shielded cables, providing maximum protection against electromagnetic interference (EMI). Additionally, SPE offers the ability to add and remove plug-and-play devices in real time.
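Because PoDL sends power down the same pair as the data, resistive loss in the cable sets how much of that power actually reaches the device. A back-of-the-envelope sketch, where the supply voltage, current and loop resistance are illustrative assumptions rather than values from the SPE or PoDL specifications:

```python
# Back-of-the-envelope PoDL budget: power surviving the I^2*R loss in the
# single twisted pair. All input values below are illustrative assumptions.

def podl_delivered_watts(v_source: float, i_amps: float,
                         loop_ohms_per_m: float, length_m: float) -> float:
    """Power available at the device after resistive loss in the cable loop."""
    loss = i_amps ** 2 * loop_ohms_per_m * length_m
    return v_source * i_amps - loss

# e.g. 24 V at 1 A over 40 m of pair with 0.1 ohm/m loop resistance:
print(f"{podl_delivered_watts(24.0, 1.0, 0.1, 40.0):.1f} W delivered")  # 20.0 W
```

The quadratic dependence on current is why higher-power devices favor dedicated power wires, as in the IEC 63171-7 connector discussed later in the article.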

What is IEC 63171-6?

When you think of Ethernet, you probably think of an RJ45 connector. The original RJ45 interface was an eight-wire connector designed for analog telecommunication devices (for example, the telephone), but the RJ45 that most of us know today is a digital version that has become the de facto standard for Ethernet-based LANs. As connectors go, RJ45 isn’t particularly strong or robust because it was designed to connect PCs in offices and data centers—not exactly harsh environments.

An RJ45 connector. (Stock image.)


Even in industrial settings, RJ45 is fine for connecting workstations and servers. But with the proliferation of the IIoT, smart devices are being placed on machinery, which calls for a hardier connector. Standard Ethernet uses unshielded twisted pair (UTP) cabling, which offers a degree of immunity against moderate EMI. Again, UTP was designed for offices and data centers, which don’t generate as much EMI as large industrial equipment. For these electrically harsh environments, it’s important to use shielded cabling.

Instead of RJ45, SPE is commonly implemented with the IEC 63171-6 connector and cabling standard. Any communication system (including Ethernet) that sends bidirectional data over two wires, has a 100 Ohm impedance and may need power over the same wires (drawing up to four amps of current) can use IEC 63171-6 connectors for their robustness and durability.

An IEC 63171-6 SPE connector. (Image: TTI.)


For challenging environments like automotive and industrial applications, IEC 63171-6 connectors are compact, rugged and field terminable. They’re also available in both IP20 and IP67 rated configurations, offering protection against vibration, dust and liquids. By adhering to existing standards, IEC 63171-6 SPE interconnects are multi-sourced, decreasing the probability of supply chain issues.

How SPE and ix Industrial work together

Ethernet spans the physical and data-link layers of the OSI networking model, making it a flexible option for networks that use multiple communication protocols. But translating from one protocol to another can introduce latency, so there is a push to go fully Ethernet from top to bottom — or cloud to edge, as it were. Ethernet's ability to coexist with other protocols gives facility managers the flexibility to upgrade their systems a little at a time.

SPE is often complemented with ix Industrial, another Ethernet cabling system that uses eight wires (four twisted pairs) for higher performance. Essentially a small, durable replacement for the RJ45 connector, ix Industrial is based on the IEC 61076-3-124 standard. While SPE can deliver bandwidths up to 1 Gbps on runs shorter than 40 meters, ix Industrial offers 10 Gbps performance at distances up to 100 meters.

Many systems use SPE at the sensor level and ix Industrial for higher bandwidth applications. The two can work in conjunction with one another, such as an industrial robot using SPE to connect its sensors to its main controller or to communicate among machines in a manufacturing cell, and employing ix Industrial to connect robots and other machinery to the factory’s LAN. ix Industrial IP20 connectors are 75% smaller than RJ45s and offer protection against shock, vibration and EMI. IP67 ix connectors offer additional protection against particulate and liquid penetration. SPE connectors, by comparison, are about half the size of their RJ45 equivalents.
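The division of labor described above (SPE at the sensor level, ix Industrial for higher-bandwidth links) can be sketched as a simple selection rule using the bandwidth and distance figures quoted in this article. The thresholds come from the text; the fallback branch is an assumption.

```python
# Minimal link-selection sketch. SPE: up to 1 Gbps on runs under 40 m;
# ix Industrial: 10 Gbps at distances up to 100 m (figures from this article).

def pick_link(required_gbps: float, run_meters: float) -> str:
    if required_gbps <= 1.0 and run_meters < 40.0:
        return "SPE (IEC 63171-6)"
    if required_gbps <= 10.0 and run_meters <= 100.0:
        return "ix Industrial (IEC 61076-3-124)"
    return "out of range: consider fiber or segmenting the network"

print(pick_link(0.1, 15.0))   # sensor to controller -> SPE
print(pick_link(10.0, 80.0))  # robot to factory LAN -> ix Industrial
```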

Building automation systems are now adopting SPE as well, although they're more inclined to use cabling and connectors whose specifications aren't as rigorous as the IEC 63171-6 model. For this reason, a variety of SPE cabling and connector standards is likely to emerge.

For instance, SPE can also be implemented with IEC 63171-7, which specifies a circular connector with the popular M12 form factor. In addition to two-wire bidirectional communication, this connector includes up to four dedicated power wires that can carry up to 16 amps each. In this way, a single cable could be used for systems with motors and other devices that require more current than SPE can provide by itself.

From mobility to manufacturing, SPE is surging in popularity

The automotive industry has long used the controller area network (CAN) bus as its communication standard, and it has held up well for vehicles up to this point. However, as we move to software-defined vehicles and autonomous vehicles, CAN bus’s comparatively low bandwidth (a paltry 1 Mbps) and limited expansion capabilities have been steering the industry towards SPE.

Although SPE has been around since 2011, few outside of the automotive industry have adopted it. But SPE is now poised to enter the industrial and building automation industries. Why the renewed interest? For one thing, the natural growth of SPE was hampered by the pandemic as major design changes were put on the back burner. But in the past couple years there has been a resurgence of activity on the SPE front. The return to working on-site, coupled with the rapid growth of IIoT, provides fertile ground for developing and implementing a new standard.

There’s also been a “chicken and egg” phenomenon with companies reluctant to adopt a new standard until it becomes more widespread. Few businesses are willing to gamble on a standard that could turn out to be the next Betamax. SPE seems to have reached a turning point in that regard, with larger manufacturers taking the leap and smaller ones following suit.

A growing number of interconnect solutions are available to meet this demand. For example, the Amphenol MSPE series of plug connectors, board-mount receptacles and panel-mount receptacles includes IP67 and IP20 versions for both SPE and ix Industrial. Amphenol also produces adapter cables so that systems using a different interface can work with SPE devices.

SPE faces competition from other standards, but advocates believe it offers the most robust solution while maintaining compatibility with other communication standards and protocols. This allows factories and buildings to gradually incorporate SPE into certain areas while keeping legacy systems intact.

To learn more about Amphenol Commercial (ACS) SPE, ix Industrial and other industrial Ethernet solutions, visit TTI.com.


Five Key Design Considerations for Pressure Sensors
https://www.engineering.com/five-key-design-considerations-for-pressure-sensors/
Tue, 19 Dec 2023
Engineers have a lot of options when it comes to sensors, but not all sensors are equal. Here's how to choose the right pressure sensor for your application.

The post Five Key Design Considerations for Pressure Sensors appeared first on Engineering.com.

TTI Inc. has sponsored this post.

Pressure sensors are finding their way into industrial machinery, biomedical equipment, automation systems and personal electronics. With a slew of these handy transducers on the market, engineers have numerous choices. Which attributes are most important when choosing a pressure sensor?

Engineering.com caught up with Honeywell product manager Simon Anderson, who offered some insight into the key factors—accuracy, stability, configurability, portability and affordability—that drive an engineer’s choice of sensor for a particular design.

The importance of accuracy

First and foremost, you need to select a sensor pressure range that is the best fit for the intended application and identify the critical pressure range where accuracy is of the greatest importance. Design engineers also need to consider the importance of sensor accuracy over the life of the product. In other words, the sensor must be accurate and stable.

Depending on your application, accuracy can be the most important factor for pressure sensors. High-accuracy sensors are more effective at diagnosing medical conditions, allowing physicians to determine the best course of treatment. The low cost of today’s sensors enables designers to place them into home-care products, such as real-time heart and breathing monitors, facilitating continuous in situ measurements over a period of time, rather than relying on occasional sampling performed in a medical facility. This allows for a faster and more accurate diagnosis, which is obviously good for the patient, and could also reduce the treatment time and cost.

Medical facilities can reduce the spread of infections by pressurizing certain rooms to prevent germs from entering the area, improving patient care and reducing potential liabilities. Likewise, in areas where infectious disease is prevalent, as in the COVID wards that sprung up during the pandemic, rooms can be depressurized in order to prevent the virus from escaping the ward. In both cases, accurate pressure sensors are needed to regulate the system.

Air filtration systems use pressure sensors to detect when filters are becoming clogged and need to be cleaned or replaced. This significantly improves HVAC system efficiency, since the fan doesn’t have to expend as much energy to maintain a certain volume of air, and increases the blower’s life as the motor doesn’t have to work as hard. Since the differential pressure across a filter is relatively low, a pressure sensor needs to have a high degree of accuracy at these ultra-low pressures.

Related to accuracy, sensitivity is the ability of the sensor to detect very small changes in pressure quickly and accurately. Ultrasensitive sensors allow devices to go above and beyond their original purposes. For example, blood pressure monitors containing pressure sensors offering high sensitivity and a fast sampling rate can not only give the systolic and diastolic numbers, but can also detect how the heart valves are opening and closing, which can offer insight into the patient’s overall cardiovascular health. Likewise, a breathing monitor with a highly sensitive pressure sensor gives a better picture of a patient’s respiratory condition. When tracking a person’s breathing, someone with asthma or COPD will have a different graph than a person who is not afflicted by a breathing disorder. This allows physicians to see the results of various treatments, enabling them to tweak or change medications accordingly.

The Honeywell HSC Trustability series pressure sensor. (Image: Honeywell.)


For example, Honeywell's HSC series and ABP2 series provide high levels of accuracy and sensitivity over a wide pressure range, from 1.6 mbar to 12 bar, and an extended temperature range, from -40°C to 110°C, supporting a wide range of applications.

Pressure sensor stability

Stability is a measure of how the sensor's accuracy may change (or drift) over time, specified as a percentage of the full-scale span (FSS). Designers should consider a sensor's stability very carefully when choosing components: how consistently it performs over the long haul, meaning the life of the product. If you're designing a product that comes with a ten-year warranty, as medical devices often do, then you'll need a sensor that guarantees that stability over the product's warranty period and beyond.

The Honeywell ABP2 series pressure sensor. (Image: Honeywell.)


Honeywell says that its ABP2 series pressure sensors deliver a worst-case drift of 0.6% FSS over 1,000 hours. “This is simply a measure of the stability over 1,000 hours and is the worst case across all pressure ranges,” Anderson explained. “It’s important to note that stability is not linear. The majority of drift within Honeywell sensors typically occurs within the first 500 hours of operation, whereas over 10,000 hours of life cycle testing, the stability of the sensor may only increase to 0.7% or 0.8%.”
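The drift specification is easy to translate into pressure units for a given device. A minimal sketch, using the 12 bar figure from the range quoted earlier as an example span; the actual span of any given part depends on its configuration, so treat this as illustrative:

```python
# Convert a stability spec quoted as a percentage of full-scale span (FSS)
# into pressure units. The 12 bar example span is taken from the range
# mentioned earlier in the article; 0.6% FSS is the quoted worst case.

def worst_case_drift(full_scale_span: float, drift_pct_fss: float = 0.6) -> float:
    """Worst-case drift in the same units as the span."""
    return full_scale_span * drift_pct_fss / 100.0

# For a 12 bar span sensor:
print(f"{worst_case_drift(12.0):.3f} bar")  # 0.072 bar, i.e. 72 mbar
```

This also shows why the same percentage spec matters much more on wide-range parts than on low-pressure differential parts.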

Configurability is crucial

The pressure sensor as we know it was introduced in 1930. (Barometric pressure “meters” have been around since the 1600s, but it wasn’t until 1930 that a sensor with an electrical output was invented.) In less than a century, the industry has gone from basic analog sensors to customizable digital smart sensors.

Honeywell’s HSC, APB2 and MPR pressure sensors come in three versions: absolute, gage (pressure relative to atmospheric pressure), and differential (difference in pressure between two points). All series come in a variety of through-hole and surface-mount packages.

The Honeywell MPR series pressure sensor. (Image: Honeywell.)


The industry is quickly trending toward digital sensors due to their ease of integration into a digital system. When a sensor is mounted within a device, it could sit some distance from the main board or controller, and it's beneficial to digitize the sensor output signal to minimize signal distortion. Once digitized, a sensor needs to send its data somewhere, which is why the ABP2, MPR and HSC series include SPI and I2C connectivity options, making them adaptable and IoT ready.
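Whether the data arrives over SPI or I2C, turning the raw counts back into pressure typically follows a linear ratiometric transfer function. The 10% to 90% calibration window below is a common convention for 24-bit digital pressure sensors but is an assumption here; the limits for any specific part must come from its datasheet.

```python
# Generic ratiometric transfer function for a 24-bit digital pressure sensor.
# The 10%-90% output window is an assumed common convention, NOT a value
# taken from any specific datasheet.

OUT_MIN = int(0.10 * 2**24)  # counts at minimum pressure (assumed 10% of scale)
OUT_MAX = int(0.90 * 2**24)  # counts at maximum pressure (assumed 90% of scale)

def counts_to_pressure(counts: int, p_min: float, p_max: float) -> float:
    """Linear map from raw output counts to pressure units."""
    return (counts - OUT_MIN) * (p_max - p_min) / (OUT_MAX - OUT_MIN) + p_min

# Mid-scale counts on a 0-12 bar sensor should read about 6 bar:
mid = (OUT_MIN + OUT_MAX) // 2
print(f"{counts_to_pressure(mid, 0.0, 12.0):.2f} bar")  # 6.00 bar
```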

Portable pressure sensors

The medical industry is also showing a demand for small, often disposable, pressure sensors to regulate the delivery of fluids or medications for patients at home. For example, a patient with a respiratory condition may wear a mask connected to a breathing monitor device. A large sensor would need to be placed on the machine, which could be at the end of a tube several meters away. Instead, a small, lightweight sensor can be attached to the mask, allowing readings to be taken closer to the point of interest and increasing measurement accuracy. Honeywell's HSC series offers a footprint of 10 x 13 mm and a 14 mm height. The ABP2 and MPR series are even more compact, coming in at just 7 x 6 x 6 mm and 5 x 5 x 6 mm, respectively.

Single-use sensors could enable a medical infusion pump to deliver medications over a 24-hour period, or attach to a smart inhaler that ensures that the drugs are being delivered at the correct rate. Also, a pressure sensor can measure a patient’s inhalation to determine whether they are inhaling deeply enough to send the medicine to the correct part of the lungs. In these cases, the single-use sensors can be active for a few months at a time before being discarded. Honeywell plans to offer a complete line of disposable pressure sensors sometime in the near future.

From a sustainability standpoint, disposable single-use sensors don’t exactly fit the green model, but the sensors can’t withstand the sterilization process and hospitals are not willing to risk patient health, not to mention liability, in order to reduce solid waste. Perhaps the next innovation in disposable sensors will be one that is partially or completely biodegradable.

Sensor affordability at the bleeding edge

In an ideal world, at least from an engineering perspective, the above four factors would be the main considerations. But in this world, we have to deal with economic realities, too. Marketing determines the price of the product you’re designing based on the competition. Accounting establishes the expected profit margin that the product will generate. So, be sure to appease the fiscal types by choosing components that do the job and fit the budget.

Not long ago, medical companies were inclined to use only technology that was tried-and-tested—in other words, old. Since the pandemic, many companies have been more amenable to incorporating leading-edge technology into their products in order to take advantage of the improved accuracy, stability and portability that today’s sensors provide. But the leading edge is called the “bleeding edge” for a reason. To minimize risk while still taking advantage of innovative technology, manufacturers are willing to trust newer technologies—even in medical devices—from well-established, reputable companies.

Honeywell offers a host of engineering design utilities, including evaluation boards, CAD models and technical notes. An array of application notes and selection guides help engineers to choose the right sensor for the job and see how it’s used in other designs. The company also works with customers to design and build custom sensors and modules to be integrated into their clients’ devices. 

Visit Honeywell from TTI, Inc. to learn more.

