Erin Winick Anthony, Author at Engineering.com
https://www.engineering.com/author/erin-winick/

The thrilling engineering ushering theme parks into the digital era
https://www.engineering.com/the-thrilling-engineering-ushering-theme-parks-into-the-digital-era/
Thu, 04 Apr 2024

The most fun engineers can have at amusement parks may just be a behind-the-scenes tour of the technologies—like CAD, simulation, robotics and generative AI—reshaping them.

When you walk through the turnstile entrance to many amusement parks, you are immediately surrounded by technology. State-of-the-art animatronics populate trackless rides. AI assistants guide you to your next stop. The most adorable robots you’ve ever seen cross your path on the way to the funnel cake stand.

Every industry is digitally transforming at its own pace, but those in the theme park business have never been scared of going fast.

“I believe we are generally the first to embrace new technologies and sometimes we’re the origin,” Charles Laureano, senior director of operations for Six Flags Over Texas, told Engineering.com.

Amusement parks hide their high tech behind the sensory overload of flashing lights, thrill seeker screams and the intoxicating combo of cotton candy and corn dogs. (Image: Unsplash / Devon Rogers.)

From 3D printing and CNC machining that quickly produce seasonal décor to CAD software and real-time 3D that better simulate the customer experience, new technologies can be seen both in customer-facing areas of parks and behind the scenes in ride design. This willingness to quickly adopt new tools has led to a rapid digital transformation that is launching the amusement park industry forward—and you don’t have to be a thrill seeker to be thrilled by this case study in successful digitalization.

Test running new tech, in and out of simulation

Being one of the first to implement new innovations at a large scale requires robust in-park testing. When your creations are used by tens of thousands of customers a day, there is only so much information you can gain from simulation and small group feedback. Laureano says Six Flags often trials new tech at a test park, typically Six Flags Over Texas in Arlington.

“We use the park to formulate a scalable plan, install the infrastructure and test the product,” Laureano said. “Sometimes we’re trying several iterations of what the product could be to see how our guests react and how it could help or hinder our own operations. From there it goes out to different sized parks before getting rolled out to the entire company.”

Test processes like this are used for everything from trying a new maintenance software scheme to an automated parking system to mobile food ordering.

Another major park operator, Disney, has been moving robotics out of rides and into park pathways. In Star Wars: Galaxy’s Edge, Disney engineers are running trials of independently roaming robots. The childlike “droids-in-training” push the bounds of sensors and human-robot interaction, inquisitively navigating theme park streets and approaching guests.

Disney engineers testing a droid-in-training. (Image: Disney.)

“Engineers and animators both are key to a project like this. Engineers are creating an electromechanical system that can physically walk and balance and move, while it’s the animators’ job to artistically craft and shape these movements into a personality. And taken together this creates a relatable character rather than just a robot,” said Joel Peavy, executive R&D Imagineer at Disney, in a November 2023 video (see below).

The key to the whole experience is simulation. By training the robots to move in a simulated environment, Disney was able to program their unique movements in “what amounts to years of learning in the real world in just a few hours in the simulation,” according to Moritz Bächer, associate lab director at Disney Research, in the same video. “And with this technology we can make inexpensive 3D printed robotic characters come to life very, very quickly.”
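Bächer’s point about compressed learning time is easy to see in miniature. The sketch below is illustrative only—Disney’s actual pipeline uses far more sophisticated reinforcement learning and physics simulation—but it shows the core idea: when each trial is a cheap simulated episode, a controller can be searched over thousands of attempts in well under a second.

```python
import random

def simulate(gain, steps=50):
    """Toy 'balance' model: return accumulated |tilt| over a short episode."""
    tilt, velocity, cost = 0.2, 0.0, 0.0
    for _ in range(steps):
        velocity += (0.1 - gain) * tilt   # destabilizing term vs. corrective control
        tilt += velocity
        cost += abs(tilt)
    return cost

def train(trials=2000, seed=0):
    """Random search over the control gain: thousands of cheap simulated trials."""
    rng = random.Random(seed)
    best_gain, best_cost = 0.0, simulate(0.0)   # uncontrolled baseline
    for _ in range(trials):
        gain = rng.uniform(0.0, 0.5)
        cost = simulate(gain)
        if cost < best_cost:
            best_gain, best_cost = gain, cost
    return best_gain, best_cost

best_gain, best_cost = train()
```

Because each trial is just arithmetic, the search covers 2,000 “attempts” almost instantly; a physical robot would need hours or days of falls to gather the same experience, which is the compression Bächer describes.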

Teaching a droid-in-training to move in simulation. (Image: Disney.)

Although these robots were tested first in simulation software, in-person test runs are giving the Disney team a better idea of what types of interactions and behaviors spark the most joy for guests, and are helping uncover any issues the robots have with navigation.

Interactive robots and other emerging technologies are certainly exciting, but for patrons of amusement parks the most impactful tech is much more familiar. Laureano says there is by far one technology that has had the biggest impact on the industry in the past decade: smartphones.

“Creating mobile apps and having our park system integrate on that platform has been integral to our continued success,” Laureano said. “Everything from purchasing your day ticket or season pass, to flash pass for our rides, mobile ordering your meals and even taking a survey on your visit are all built around the mobile platform.”

Last year Six Flags even rolled out a generative AI virtual assistant on their mobile app, powered through a partnership with Google Cloud.

Supporting increased cell phone use over the large footprint of an amusement park required utility upgrades in addition to software development. Six Flags Over Texas has installed nearly 100 Wi-Fi access points, 14 cell antennas and nearly five miles of fiber on the property to handle the load.

Upgrading the rider experience

When you are quickly turning and looping through a ride, it’s easy to miss just how many technologies, innovations and advanced elements come together to create a modern ride experience.

For example, a ride like Justice League: Battle for Metropolis at Six Flags Over Texas features roving ride vehicles with three degrees of freedom, 16-channel onboard audio, movement of up to 6 feet per second, 4K projection technology and what Laureano describes as “a state-of-the-art gaming system.” It features several interactive screens, animatronics and practical effects like fire and fog.

Without digital tools like simulation, computer-aided design (CAD) and sensors, rides like this would be impossible to create. However, designers don’t want visitors to be thinking about this complexity during their ride. The technology should not be front and center in the experience, but rather augment it.

An Autodesk Revit model of the Uncharted: Enigma of Penitence ride. (Image: Sally Dark Rides.)

“I think something that a truly good dark ride needs is a perfect marriage between a very compelling story and exciting uses of technology. So how to make the show truly as immersive as it can be is the main goal of what we do here,” Sally Dark Rides CAD supervisor Michael Torres told Engineering.com.

Dark rides are indoor attractions common at amusement parks that guide guests through a scene-driven adventure. Sally Dark Rides creates themed rides which rely heavily on animatronics and characters. Most recently they opened Spongebob’s Crazy Carnival Ride, a trackless dark ride at Circus Circus in Las Vegas, Nevada. Systems like this rely heavily on sensors and predetermined programming to guide ride cars through a scene.

“That trackless system really just helps tell the story. The fact that you don’t see a track makes it almost like you’re kind of floating in the water,” Torres said.
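The control idea behind a trackless system can be sketched simply: instead of a physical rail, the vehicle follows a pre-programmed route of waypoints, firing scene “show actions” as tagged waypoints are reached. The toy example below is illustrative only—the coordinates and action names are invented, and a real controller closes the loop with sensor feedback rather than stepping along ideal paths.

```python
import math

# Invented route: waypoint coordinates in meters, each with an optional show action.
ROUTE = [
    ((4.0, 0.0), "scene_1_lights"),
    ((4.0, 3.0), None),
    ((8.0, 3.0), "scene_2_animatronic"),
]

def run_show(route, start=(0.0, 0.0), speed=0.5):
    """Step toward each waypoint at a fixed speed per tick; fire tagged actions."""
    pos, ticks, fired = start, 0, []
    for target, action in route:
        while math.dist(pos, target) > 1e-9:
            d = math.dist(pos, target)
            step = min(speed, d)            # don't overshoot the waypoint
            pos = (pos[0] + step * (target[0] - pos[0]) / d,
                   pos[1] + step * (target[1] - pos[1]) / d)
            ticks += 1
        if action:
            fired.append(action)
    return ticks, fired

ticks, fired = run_show(ROUTE)
```

The predetermined programming lives entirely in the route table, which is why a trackless car can take a different path on every ride cycle simply by swapping in a different waypoint list.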

Transforming amusement park design

This quick adoption of new technologies means the industry is constantly pushing digital transformation, both directly in its rides and parks and in the manufacturing processes required to create them.

Tools like CAD and simulation software, as well as 3D printing and CNC machines, have become essential parts of the park and ride design process. They have radically reduced the time required for engineering calculations and construction.

An Enscape rendering of the Uncharted: Enigma of Penitence ride. (Image: Sally Dark Rides.)

“I have a pretty active imagination and I can envision what something looks like in my head. But, things like simulation software and 3D renderings help share what that vision is to a broader audience and when it comes to construction, can help really bring the details to life,” Laureano said. “It provides a clear vision and ability to effectively communicate, which is key when you’re trying to get the smallest of details right.”

Sally Dark Rides relies on AutoCAD and Revit as their primary design tools, supplemented by Enscape and Unreal Engine for high quality renderings.

 “We try to use Enscape to see how the show looks from the actual guests’ view,” Torres said. These real-time renderings allow for improvements to the guest experience before construction even begins.

3D model of Spongebob’s Crazy Carnival built in AutoCAD and Revit, and taken through other 3D base software before being fully “show-dressed” with texturing for previsualization in Unreal Engine. (Image: Sally Dark Rides.)

Once the models are fully designed and rendered on the computer, they move to prototyping the physical version. This is where newer rapid prototyping techniques come in handy.

“We do 3D printing for scale mockups of our characters, and scale mockups of certain scenes that have very large show actions,” Torres said. Show actions are important points in the ride or story, so seeing these mockups allows designers to better visualize animatronic movements that are important to guests.

An FLSUN 3D printer used by the Sally Dark Rides team. (Image: Sally Dark Rides.)

Next up, they create the frame of the full-scale character, which is passed along to the pneumatics team that enables the character to move. The programming team takes the character and adds in the commands before it goes on to art for finalizing the look of the character. The collaboration of these groups outputs the talking Mr. Krabs or gesturing Squidward that all the guests see.

“Why I am in this industry is honestly the big payoff of seeing how the guests react to what we have helped bring to life,” Torres said. “That part is the most rewarding.”

In addition to prototyping uses, Six Flags even uses 3D printed parts in their final attractions. This is especially helpful for seasonal projects with a shorter lifespan.

“The haunted house attractions we design and build heavily rely on 3D printing and our CNC machines,” Laureano said. “We will typically design a scenic piece or even part of a costume and then 3D print it. From there we can make changes or we can create molds so we can mass produce parts. The team recently printed hundreds of custom flower elements in different ‘blooming’ positions for a [haunted] house we built for Scream Break.”

Whatever the new technology, it is likely to find some use in theme parks to streamline the amusement park experience and take rides to the next level. Amusement parks serve as a prime example of how technology can be used to refine user experiences, and test the limits of human-human and human-technology interactions.

A 3D model created in Autodesk 3ds Max by Sally Dark Rides when designing the Justice League: Battle For Metropolis ride. (Image: Sally Dark Rides.)

That push for innovation and digital transformation starts with the people passionate about this new era of amusement parks.

“Together as a team, we are constantly leveraging technology to not only make things easier for our guests, but more fun,” Laureano said. “Our team, the rides, the lighting, the sound, the experience all rely on technology to create an immersive experience. It’s part of everyday life and embracing technology and finding new ways to use it is really a ton of fun.”

Engineers are the secret to this research vessel’s success
https://www.engineering.com/engineers-are-the-secret-to-this-research-vessels-success/
Mon, 11 Mar 2024

Armed with a 3D printer, compact CNC mill and design software, engineers on the research vessel JOIDES Resolution fix just about anything.

Marine Electronics Specialist Jurie Kotze works with the software paired with the CNC machine on board the ship. (Image: Erin Winick Anthony.)

When you work in a factory environment and something breaks, it’s simple to order a new part online or head to the manufacturing floor to make a quick fix. Replacement stock and expedited spares shipping are easy to take for granted as a manufacturing engineer.

But engineers and scientists working in extreme environmental conditions don’t have those conveniences and must be able to adapt on the fly.

That’s the case aboard the JOIDES Resolution (JR), a scientific research vessel that conducts months-long research missions at sea without any resupply shipments or stops at port. The vessel is equipped with a large drill researchers use to collect rock cores from beneath the sea floor for scientists to study past climate, life in extreme conditions and much more.

Operating in this capacity since the 1980s, the JR recently wrapped up its Expedition 401, which studied a period 5-8 million years ago when the Mediterranean Sea disconnected from the Atlantic Ocean, causing it to largely dry up and form a 1,500-meter-thick layer of salt on the sea floor. This had a huge impact on Earth’s oceans and climate. By studying this period, scientists hope to understand how Earth responded to this extreme period and how to use that knowledge to improve our climate models for the future.

Conducting this type of science is a complex endeavor, and not just because of the research. It involves numerous moving parts, more than 100 people working on the ship for two months and keeping a decades-old ship running smoothly.

Meet the manufacturing crew

To ensure the ship’s expeditions stay on track, the science operations team outfitted the JR with a miniature machine shop. Although space is limited, resources are finite and there is no supply chain, the team has evolved the shop into an effective at-sea problem-solving facility.

When you need something fixed at sea—whether it is a complex piece of laboratory equipment or a scientist’s glasses—Expedition 401’s Marine Electronics Specialists Etienne Claassen and Jurie Kotze are there to get it done. In his 15 years of service aboard the JOIDES Resolution, Claassen has collectively spent more than six years at sea, sailing, machining and fixing just about everything on the boat. Kotze succeeded his father on the ship after getting a background in biomedical engineering. The South African pair bring their wealth of design and manufacturing experience to every out-of-commission saw and part they create in SolidWorks.

Manufacturing tools located in the lower decks of the JOIDES Resolution. (Image: Erin Winick Anthony.)

“In the middle of the ocean, you can’t possibly prepare for everything. These days everything has little plastic gears and parts in it. That stuff gets worn out so quickly. A little drop and it just breaks, so there are a million parts here that you will never have on board,” says Kotze.

Since the ship operates 24 hours a day for two months straight, each of the specialists works a 12-hour shift—from noon to midnight or midnight to noon—ensuring someone is always awake if a machine goes down.

The tools are kept in a low-ceilinged room off the scientific labs, deep in the ship’s hull. The walls are lined with drawers filled with tools and materials.

When Claassen first started on the ship, the main tool was a hand mill, but the equipment selection has improved significantly over time. A small hand lathe was the first big addition, but when that broke, they took a big jump to a CNC mill, a Tormach 770, a compact mill designed for prototyping.

The JOIDES Resolution can’t handle larger pieces of equipment you might typically find on a manufacturing floor. However, its reasonably sized CNC mill paired with a RAISE3D Pro2 3D printer can address most of the crew’s manufacturing needs. Combined with some hand saws, a lathe and mill, the technicians make the most of the space on board. Devices like the 3D printer are securely attached to tables or the floor to ensure that, even in waves and extreme weather, they stay in place.

A compact CNC mill. (Image: Erin Winick Anthony.)

In addition to the tools, some raw materials and other critical supplies are kept on hand. Drawers are stacked full of resistors, transistors, caps and power supplies for repairing old electronics. Next to the machines are stashes of large pieces of brass, bronze, aluminum, plastic, Delrin, nylon, wood and Teflon stock.

“We would rather go and buy something bigger and we can always size it down. We have quite a lot of stock. Not like a machine shop, but enough we can get by,” Claassen said. “And as soon as we use it and we know we are going to use it again we order more.”

Their most commonly used materials are aluminum, wood, Delrin, and PLA plastic filament for the 3D printer. Although PLA is by far the most used 3D printing material at sea, ABS, polycarbonate, nylon, and flexible NinjaFlex filament are also kept on hand.

On-site manufacturing to the rescue

The machines and devices Kotze and Claassen must repair are extremely diverse. Sometimes it is cutting-edge geological research equipment. Other times it is a decades-old piece of hardware.

“A lot of stuff on the ship is obsolete. We have to take both new stuff and old stuff and make it work,” Claassen said.

The 3D printer came in handy when the ship’s engine room was looking to replace some caps. The parts were no longer manufactured or available to purchase, so Kotze took one of them, modeled it in SolidWorks and 3D printed all new replacements in PLA.

“The 3D printer [was something] we got as a little side thing. But over the years it has become such an important tool,” Kotze said.

Despite PLA parts being considered relatively weak when exposed to outdoor environments for long periods, Kotze has even seen great success using 3D printed parts to create high-use outdoor items.

“I know PLA is very weak against sun and water, but the mechanism we have on the Vibration Isolated Television (VIT) to take water samples we 3D printed with PLA. After many deep runs into the ocean, the PLA parts still work,” Kotze said.

And deep runs is right. The VIT contains a camera and tools that are sent down to the ocean floor. For some expeditions, that can be thousands of feet below the surface. That means the 100% infill PLA parts that help sample water have been able to withstand high ocean pressures, salt water and extended periods in the sun.
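A quick back-of-envelope calculation shows what those parts are up against. Hydrostatic gauge pressure follows P = ρgh; the 1,000-meter (roughly 3,300-foot) depth below is an assumed illustrative figure consistent with “thousands of feet,” not one reported for the VIT.

```python
RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density
G = 9.81               # m/s^2, gravitational acceleration

def hydrostatic_pressure_mpa(depth_m):
    """Gauge pressure at depth, in megapascals: P = rho * g * h."""
    return RHO_SEAWATER * G * depth_m / 1e6

pressure = hydrostatic_pressure_mpa(1000.0)  # assumed 1,000 m dive
```

That works out to roughly 10 MPa, about 100 times atmospheric pressure, which puts the survival of the 100% infill PLA parts in perspective.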

But plastic does not do it all. Repair tasks can require a more durable part. That is where the mills and lathes come into play.

When a small but crucial part of the ship drill’s top drive failed, the drill floor crew came to Claassen for a quick fix. Without it, the drill was out of commission and the science came to a halt.

“A small thing took me an hour to make, but it would have put the whole expedition in jeopardy because of one little part,” Claassen said. “I’m not saying I saved the expedition, but unfortunately if you don’t have that part, you can’t drill.”

Creating your own distributed manufacturing location

While the JOIDES Resolution may be a unique floating laboratory, there are many more people taking advantage of manufacturing facilities in remote locations. Astronauts are 3D printing tools on the International Space Station. Military operations are looking at manufacturing options in distant outposts.

Of all the tools he works with on the ship, Kotze really sees a 3D printer as the most important asset for these types of manufacturing scenarios.

“I think if you have a 3D printer, stable power, and design software and you’re in the middle of nowhere, you can make any part that’s within a foot cube of size,” Kotze said.

Claassen and Kotze also recommend either a hand mill or, if you can get one, a CNC machine.

“If you really have something in a remote location, let’s say you’re building a new Antarctic base, a CNC machine is important,” Claassen said.

With the advancement of 3D printing paired with compact CNC technology, remote manufacturing outposts have the opportunity to create truly self-reliant hubs.

The Arcade Classic That Tilted Digital
https://www.engineering.com/the-arcade-classic-that-tilted-digital/
Tue, 17 Oct 2023

Pinball machine manufacturers have reinvented the game using simulation-driven design, rapid prototyping, data analytics and more—and digital transformation has never been more fun.

Chiming bells. Snapping flippers. The click-clack of a score changing.

The sounds of pinball evoke a classic 1980s arcade filled with teenagers pushing quarters into coin slots. Old pinball machines are nostalgic pieces of technology transporting us back to this time.

But modern-day pinball is a whole different technological game. Like many other products, pinball has had to reinvent itself in the digital age, with today’s state-of-the-art machines sporting cameras, large LCD screens, internet connectivity, increasingly complex code, custom animation and more.

Playfield of Stern Pinball’s Godzilla machine. (Image: Stern Pinball.)

Pinball machine manufacturers have had to adapt. By increasing their use of CAD and simulation software, adopting 3D printing and other methods of rapid prototyping, embracing data analytics and diversifying their engineering workforce, pinball makers have brought the game into the 21st century—and set a high score for digital transformation.  

Digitizing pinball design and testing

Pinball is a merging of art and engineering. The aesthetic appeal is what draws players in. Engaging gameplay and functioning machines keep the players around. The continued introduction of new design tools has helped merge these two worlds.

“Our manufacturing and fabrication skills have been very positively affected by CAD because it inherently brings reliable repeatability to automation,” George Gomez, chief creative officer at Stern Pinball, told engineering.com. Founded in 1977, Stern is now the largest producer of pinball machines in the world, reporting sales growth of 15-20% every year since 2008.

Pinball machines go through a lot of abuse out in arcades and bars around the world. Players slap the sides and nudge the machine around. The point of the game itself is to launch the pinball into bumpers, drop targets and slings. Simulation tools have helped companies like Jersey Jack Pinball, the second largest producer of pinball machines, ensure their designs can stand up to these forces.

Jersey Jack game designer Eric Meunier told engineering.com that the company uses Solidworks for 3D modeling as well as for stress and strain simulations. These analyses are a valuable tool when designing the plastic injection molded pieces that populate a pinball machine.

“You need to understand how it will wear when you’re rocketing an 88-gram steel ball at it,” Meunier says. “It’s important to do that analysis ahead of time before you’ve invested tens of thousands of dollars in tooling.”
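The numbers behind that concern are easy to estimate. The 88-gram ball comes from the article, but the ball speed and stopping distance below are assumed illustrative values, not Jersey Jack figures.

```python
BALL_MASS_KG = 0.088  # the 88-gram steel ball cited in the article

def impact_energy_j(speed_m_s):
    """Kinetic energy the plastic part must absorb: E = (1/2) * m * v^2."""
    return 0.5 * BALL_MASS_KG * speed_m_s ** 2

def avg_impact_force_n(speed_m_s, stop_distance_m):
    """Average force over the stopping distance: F = E / d."""
    return impact_energy_j(speed_m_s) / stop_distance_m

energy = impact_energy_j(5.0)            # assumed 5 m/s ball speed -> ~1.1 J
force = avg_impact_force_n(5.0, 0.002)   # assumed 2 mm part deflection
```

Even at these modest assumed values, a small injection-molded feature absorbs hundreds of newtons per hit, thousands of times per day, which is why the stress and strain simulation matters before cutting tooling.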

CAD model of Jersey Jack’s Godfather pinball machine. (Image: Jersey Jack.)

But pinball is a physical game, and simulations alone don’t cut it. That’s why Meunier also relies on rapid prototyping. He says he can use his tool shop’s laser cutter to go from a Solidworks sketch to a physical part in “under an hour.” He also uses 3D printing on a near-daily basis.

“As I’m conceptualizing something, I want to throw a ball at something and see how it works,” Meunier says.

Bring on the data

The addition of modern features in pinball games has not only drawn in players. It has also been a bonus for designers. With internet connectivity, pinball machines can evolve over time with software updates that introduce completely new gameplay for years after release. The connection also uploads an abundance of data back to machine manufacturers.

On Stern pinball machines, players can scan a personal QR code before playing to keep track of their progress and, if they’re good enough, to land on worldwide high score boards. On the back end, this provides a wealth of information to Stern’s designers.

“It provides mountains of real data on game and feature performance from the global network of games,” Gomez says. “We’ve never had so much insight.”

A QR code scanner on the new Stern Venom pinball machine. (Image: Stern Pinball.)

By observing the parts of the game players most interact with and what strategies they gravitate towards, Stern can work to better balance the game. It also keeps dedicated players coming back, wanting to see what has changed with the machine through software updates.

“It’s no different than any other connected product such as your computer, phone or car,” Gomez says. “The wealth of data that we get from connected games gives us incredible insight that is no longer based on someone’s gut feeling or speculation about how something works. It also provides insight into geographic and cultural trends and performance.”

The changing workforce

As pinball has evolved, so too has the process of designing and building pinball machines. In the old days, Gomez says, a single designer could lay out the playfield and configure the relays to create the game’s logic. They might have some help from a mechanical or electrical engineer to get the game into production, but it was largely a solo effort. Today, that’s changed completely.

“Today’s teams include the designer, a lead developer, numerous additional software engineers and the artists,” Gomez said. “The motion graphics team is full of specialists, storyboard artists, modelers, animators, movie makers, and UI specialists. Of course, there are sound designers, composers and performers creating audio. Electrical engineers and technicians design the electronics platform that supports all devices, displays, audio and game logic.”

This combination of artists, engineers and pinball experts comes together to create the design, code and hardware for the game. Then comes the complexity of manufacturing and assembly. A significant amount of the work is still done by hand due to the mix of materials inside the cabinet and the intricacy of these machines.

“If you try to find other products that include glass, wood, plastic, steel, cabling, hardware, and circuit boards… automotive, aerospace, and not much else has all these different commodity styles. So we’re looking to some of those industry leaders on how they build their teams effectively,” Meunier said.

Krystle Gemnich, a production quality technician at Jersey Jack, makes an adjustment before a Toy Story 4 pinball machine playfield moves further down the line. (Image: Jersey Jack Pinball.)

Meunier says that Jersey Jack has grown from five people when he started with the company in 2012 to more than 100 employees today. During that growth period, increasing diversity in the workforce has been a priority, aiming to involve more people in the largely male-dominated world of pinball.

“We have increased and diversified our engineering strength to include women and other minorities that provide a different perspective on pinball and on engineering. It’s great to have other people playing and helping design the games,” Meunier said.

To keep up with increased demand, Stern just made some major manufacturing changes. The company massively increased their manufacturing space, investing in new employees and several new facilities totaling over 200,000 square feet in Elk Grove Village, Illinois.

“The move has allowed us to step back and look hard at the flow of material, the application of tooling and the distribution of the work we do. Any move is an opportunity to re-evaluate methods,” Gomez says.

Behind-the-scenes of Stern Pinball’s manufacturing floor. (Image: Stern Pinball.)

Going forward, Gomez sees their major workforce growth shifting to handle the massive amounts of information being fed back to them by the connected pinball machines.

“The next wave of engineers we hire might be data engineers and scientists to improve our ability to interpret the data,” Gomez says.

As far as what other industries can take away from this growth and digital transformation, Gomez thinks the biggest lesson is on the business side.

“I think we’ve re-invested in our business carefully; letting the business we do guide our growth realistically. I also think it’s important for the staff to understand goals and targets so that everyone is pulling the rope in one direction.”

The Next Generation of Wireless will be Powered by AI
https://www.engineering.com/the-next-generation-of-wireless-will-be-powered-by-ai/
Thu, 12 Oct 2023

Learn how machine learning is making a dent in 6G, Wi-Fi, networking, chip design and more.

Artificial intelligence (AI) and machine learning (ML) are seeping into all corners of industry, and wireless communication is no exception. Engineers are beginning to see the impact in everything from networking to chip design. And as the industry moves toward 6G, AI is only going to become more prominent.

Companies from Cisco to Keysight Technologies to Qualcomm are all exploring the use of AI in communications systems: troubleshooting, power saving, channel estimation, MIMO detection—the list goes on. And as much as AI is already changing wireless design and providing helpful tools to engineers, there is even greater potential for the technology in the future.

Streamlining wireless systems

The first applications of AI in the wireless industry have been for streamlining existing systems.

“The industry is using AI techniques to improve upon existing wireless communication, engineering of systems and networks,” Houman Zarrinkoub, principal product manager at software company MathWorks, told engineering.com.

A neural network used for signal classification. (Image: MathWorks.)

One area where this is often seen is network design. Artificial intelligence is helping designers manage larger networks and systems that can be cumbersome to handle.

“Resource allocation, scheduling and dividing finite resources within a large subscriber base is one of those applications where AI is shining right now,” Zarrinkoub says.

Wi-Fi is among the wireless standards starting to reap the benefits of AI. “There’s a new standard called 802.11az which is dedicated toward positioning and localization, and there is a new standard called 802.11bf which is essentially designed for wireless sensing. Both of these things have AI algorithms associated with them,” Zarrinkoub says.
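
Positioning standards like 802.11az build on round-trip-time ranging: the responder reports its turnaround delay, and distance falls out of the remaining time of flight. The sketch below shows only that basic arithmetic; the function name and sample numbers are illustrative, and real implementations add calibration and the AI-based refinements the standard enables.

```python
from statistics import median

C = 299_792_458.0  # speed of light, m/s

def ftm_distance_m(rtt_s, turnaround_s):
    """Range estimate from one fine-timing-measurement exchange:
    time of flight is the round trip minus the responder's reported
    turnaround delay. Names and numbers are illustrative."""
    return C * (rtt_s - turnaround_s) / 2.0

# Noisy RTT samples for a device roughly 15 m away:
turnaround = 100e-9
flights = [100.1e-9, 100.2e-9, 99.9e-9]   # two-way flight times, ~100 ns
estimate = median(ftm_distance_m(turnaround + f, turnaround) for f in flights)
```

Taking the median over several exchanges, as above, is a simple way to suppress outlier timestamps before any learned correction is applied.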

AI is also impacting wireless chip design, part of a broader industry trend. In 2020, Google developed a reinforcement-learning algorithm that designed chip floor plans for Google’s TPU. Google DeepMind announced further developments this year, using AI to design specialty semiconductors. Numerous organizations are taking on the task of pushing AI-powered chip design even further.

For wireless chip design, AI is helping at the power amplification step of signal transmission. As society has moved through the generations of wireless from 3G to 4G to 5G, transmission bandwidth requirements have climbed. This is where Zarrinkoub sees potential for AI applications, especially as those requirements continue to climb for 6G.

Matlab users can apply neural network-based digital predistortion (DPD) to offset the effects of nonlinearities in a power amplifier (PA). (Image: MathWorks.)

“Imagine you have one gigahertz bandwidth. These systems are physical systems. Maintaining linearity, meaning that energy is distributed evenly at the first frequency until the last frequency, is a nightmare,” Zarrinkoub says. “But nowadays everybody we talk to on the device side is asking for the application of AI in digital predistortion and in maintaining linearity of a power amplifier.”
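
The article describes neural-network DPD; as a toy illustration of the underlying idea, the sketch below models a memoryless amplifier with cubic gain compression and numerically inverts it, so that predistorter plus PA come out linear. The coefficient and Newton-iteration approach are my own simplifications, not MathWorks’ method.

```python
def pa(u):
    """Toy memoryless power amplifier: gain compression via a cubic term."""
    return u - 0.05 * u ** 3

def predistort(x, iters=20):
    """Solve pa(u) = x for the drive level u by Newton's method -- a toy
    stand-in for the neural-network DPD described in the article."""
    u = x
    for _ in range(iters):
        u -= (pa(u) - x) / (1 - 0.15 * u ** 2)
    return u

raw = pa(1.0)               # 0.95: 5% compression with no predistortion
lin = pa(predistort(1.0))   # ~1.0: predistorter cancels the nonlinearity
```

Real PAs have memory effects and frequency-dependent behavior across a wide band, which is why a learned model replaces this closed-form inverse in practice.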

AI design tools

Companies like MathWorks, DeepSig and Ekahau are providing tools to support the next era of AI-powered wireless design. MathWorks’ Matlab software offers machine learning, statistics, deep learning and reinforcement learning toolboxes.

“In the Matlab environment, we provide all the algorithms that are out there. We grow them based on the new innovations that are helping with either training-data-driven approaches or simulation-driven approaches. Then we use those foundational AI applications and products to apply it to the vertical applications,” Zarrinkoub says.

The company website touts that these applications could help generate training data in the form of synthetic and over-the-air signals, label signals collected from wireless systems, or augment signal space by adding RF impairments and channel models to user-generated signals.
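
That training-data workflow can be sketched in a few lines: generate known symbols, apply impairments such as noise and carrier frequency offset, and keep the ground-truth labels. The generator below is a stdlib-only illustration; the modulation choice, parameters and function name are my own, not MathWorks’ API.

```python
import math
import random

def synth_qpsk(n, snr_db, freq_offset=0.0, seed=0):
    """Generate labeled QPSK symbols with AWGN and a carrier frequency
    offset -- impaired synthetic data of the kind used to train signal
    classifiers. Names and parameters are illustrative."""
    rng = random.Random(seed)
    noise_std = math.sqrt(10 ** (-snr_db / 10) / 2)  # per-axis noise for unit-power symbols
    samples, labels = [], []
    for k in range(n):
        bits = rng.randrange(4)                      # 2 bits -> one of 4 constellation points
        sym = complex(1 - 2 * (bits & 1), 1 - 2 * (bits >> 1)) / math.sqrt(2)
        rot = complex(math.cos(freq_offset * k), math.sin(freq_offset * k))
        noisy = sym * rot + complex(rng.gauss(0, noise_std), rng.gauss(0, noise_std))
        samples.append(noisy)
        labels.append(bits)
    return samples, labels

signals, labels = synth_qpsk(1000, snr_db=10, freq_offset=0.01)
```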

One of the common concerns with machine learning algorithms is the concept of “black box” AI. This is an algorithm that can operate and make choices, but cannot show how those outcomes were reached. Zarrinkoub says MathWorks has focused on making it accessible and clear in Matlab where AI-based results are coming from.

“If you are in doubt of what we are doing underneath the hood, you open the hood and you see what algorithm was used for training, testing, measurement and so on,” Zarrinkoub says.

Future impacts of AI on wireless

The 3rd Generation Partnership Project (3GPP), the standards body behind 5G, is currently studying the role of AI in the communications industry. The group unites seven telecommunications standard development organizations to create technical specifications and reports for mobile systems. Although 3GPP does not create AI and ML models itself, its frameworks and evaluation strategies have the potential to guide and propel the future use of these technologies in the industry. Its newest standards release is expected soon.

“There are some promising applications of AI in there,” Zarrinkoub said.

Many future applications of AI and ML are focused on optimization. They prioritize saving designers time and speeding up decision-making in systems. For example, although classical techniques are still used for coordination between the base station and user equipment, AI is being discussed as a possible way to improve the system.

“With AI the amount of information sent back and forth is going to be much reduced. Essentially, that process is going to be a lot cheaper, less computationally intensive and much cheaper in terms of spectrum,” Zarrinkoub said.

While there is focus on bringing AI solutions into the mix as soon as possible, many organizations are also concentrating on potential larger-scale adoption in the 5G to 6G transition. As the industry shifts to 6G, everyone is hoping to bring in as many AI- and ML-driven design optimizations as possible.

“As AI models and testing best practices mature, there is no doubt that AI will revolutionize wireless communications in the next 5-10 years,” Sarah LaSelva, Keysight Technologies’ director of 6G marketing, said in a company blog earlier this year.

No matter the speed of adoption, AI has definitely found a home in streamlining the wireless communications industry. As Zarrinkoub says, this is because many of the issues that engineers face are a perfect match for the potential of AI-powered technologies.

“In the wireless domain, the problems that are inherently nonlinear or inherently multivariate, where the number of parameters is substantially bigger than we are used to, these are the problems for which AI provides all the right solutions,” Zarrinkoub says.

The post The Next Generation of Wireless will be Powered by AI appeared first on Engineering.com.

]]>
The LK-99 Saga: For Engineers, the Dream of Room Temperature Superconductors Lives On https://www.engineering.com/the-lk-99-saga-for-engineers-the-dream-of-room-temperature-superconductors-lives-on/ Wed, 23 Aug 2023 16:30:00 +0000 https://www.engineering.com/the-lk-99-saga-for-engineers-the-dream-of-room-temperature-superconductors-lives-on/ From better battery life to bigger quantum computers, warmer superconductors would be a gamechanger for engineering.

The post The LK-99 Saga: For Engineers, the Dream of Room Temperature Superconductors Lives On appeared first on Engineering.com.

]]>
The scientific community has been abuzz the past few weeks because of two pre-print articles released on arXiv claiming the discovery of a room temperature superconductor, LK-99. (Read the papers here and here. Note that they were not peer reviewed prior to release.)

These publications brought to the forefront of conversation the potential impact of what a room temperature superconductor could mean for many different fields. As the hunt to replicate or debunk these results comes to a conclusion, here is a recap of the LK-99 saga and what this area of research could mean for engineering and technology development.

What are superconductors?

Superconductors are a big deal because they can transport electricity without any loss of energy. Unlike normal conductors, superconductors have zero resistance. With nothing to restrict the flow of electricity, numerous technologies and worldwide infrastructure could be improved.

In addition to being ultra energy efficient, superconductors also allow us to create powerful magnetic fields, enabling their use in MRI machines and magnetically levitated (maglev) trains.

Researchers have already discovered a number of superconductors. The US Department of Energy estimates that approximately half of the elements in the periodic table display low-temperature superconductivity. However, these materials still require ultra-low temperatures or extremely high pressures, which is not feasible for most everyday uses of the technology. Even so, they have found some current uses in transportation and medicine.

What is LK-99?

Published on July 22 by South Korean scientists Sukbae Lee, Ji-Hoon Kim and Young-Wan Kwon, the LK-99 papers reported the creation of a superconductor that worked at room temperature and ambient pressure. In fact, the researchers claimed that not only did it work at room temperature, but it had superconducting properties up to 127°C (around 260°F). Known as LK-99, the reported superconducting substance is made up of a combination of copper, lead, phosphorus and oxygen.

Still from a video supposedly demonstrating a pellet of LK-99 levitating at room temperature and atmospheric pressure. (Image: Lee et al.)

These were big claims, and the scientific community was immediately skeptical, wanting to see the results replicated before giving them credence. Repeatability is crucial in all scientific work, and it was immediately called for here. The fact that the papers were posted publicly as pre-prints before peer review only increased the skepticism. Many research teams immediately started trying to match what was reported in the paper.

Less than a month after the LK-99 papers were published, other researchers are reporting that they believe the results in the South Korean researchers’ papers are not correct. Teams trying to replicate the results came across a number of red flags in the initial publication and in the video that shows LK-99 levitating. The apparent evidence of superconductivity is now generally credited instead to impurities in the substance the South Korean team created, and to properties like ferromagnetism.

How room temperature superconductors could help engineers

Although it does not look like we will have room temperature superconductors within the next month, we can still dream of the ways the technology would—and maybe someday will—benefit engineers. Here are just a few examples.

Less heat production from electronics

Fans and large heat sinks are a constant in many electronics. Engineers always have to think about dissipating heat in their designs and build in precautions against overheating. The main cause of this heat production is resistance in the materials conducting the electricity. Since superconductors lack this resistance, they don’t produce heat. They could help us shrink the size of our electronics and open up many new technological possibilities.

Longer battery life for phones and laptops

When electricity is not lost to heat, our devices become much more energy efficient. That means electronics can put their stored energy to full use, increasing battery life. It could also enable smaller batteries that maintain the battery life consumers have come to expect.

More efficient energy grid

With no loss of energy throughout the process of generating or moving electricity around, less energy would need to be produced, potentially lowering utility costs and emissions. Less stress would be put on the grid during hot summers and cold winters, resulting in a lower chance of power outages.
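
The grid-efficiency argument comes down to Joule heating: resistive loss scales as I²R, and a zero-resistance line loses nothing. A back-of-the-envelope sketch with illustrative numbers (not drawn from any cited study):

```python
def line_loss_fraction(power_w, volts, ohms):
    """Fraction of transmitted power lost to I^2 * R heating in a line
    (toy single-conductor numbers for illustration)."""
    current = power_w / volts          # amps drawn to deliver the load
    return current ** 2 * ohms / power_w

# 100 MW delivered at 345 kV through 10 ohms of line resistance:
conventional = line_loss_fraction(100e6, 345e3, 10.0)    # ~0.8% lost as heat
superconducting = line_loss_fraction(100e6, 345e3, 0.0)  # zero resistive loss
```

High transmission voltages already exist to keep that I²R term small; a superconducting line removes it entirely.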

According to a quantum materials research workshop publication from the US Department of Energy Office of Science, some higher temperature (still under 0°C) superconductors are already being used in Seoul, South Korea to transport energy to high-rise buildings. Raising the operating temperature of superconductors could bring this type of technology around the world.

Bigger quantum computers

Many aspects of quantum science happen at extreme temperatures. Cooling atoms close to absolute zero slows them down and makes quantum phenomena more visible. However, keeping computers at those temperatures consistently requires a significant amount of energy. So it is no surprise that superconductors could help advance quantum computing, reducing the high energy requirements of cryogenic systems and potentially allowing the computers to grow in scale.

As a side note, scientists are also working to develop quantum computers that can operate at higher temperatures as well. Both superconductors and quantum computers functioning closer to room temperature could be a game changer.

Smaller and less expensive MRI machines

The other major application of superconductors is rooted in their connection to magnetism. As materials transition to their superconducting state during supercooling, they expel magnetic fields from their interior, a phenomenon known as the Meissner effect. Superconducting coils can also carry the enormous currents needed to generate powerful magnetic fields without overheating.

MRI machines depend on the creation of magnetic fields to take medical images. According to the US Department of Energy, larger MRI machines typically lower the temperature of an alloy of niobium and titanium to superconducting levels to generate the needed magnetic fields. Maintaining this process makes the machines large and expensive.

Room temperature superconductors would allow these massive machines to be scaled down and significantly reduce their complexity.

More accessible magnetically levitated trains

Maglev trains that use low-temperature superconductors already exist. Using supercooled materials, the train system creates magnetic fields up to ten times stronger than those of typical electromagnets. These magnets push the trains away from the track, levitating them a few inches above its surface.

Despite the technology being available, you may have noticed maglev trains are not popping up in every city. Similar to large MRI machines, they are very expensive and not feasible for most locations. Room temperature superconductors could allow for cheaper designs that can operate at lower costs over longer distances.

Better particle accelerators

One other current application of superconducting magnets is in particle accelerators. The world’s most powerful particle accelerator, the Large Hadron Collider (LHC), consists of a 27-kilometer ring of superconducting magnets. These magnets are cooled to their operating temperature of ‑271.3°C (-456.34°F) by liquid helium. According to CERN, the creators of the LHC, “if normal magnets were used in the 27-kilometer long LHC instead of superconducting magnets, the accelerator would have to be 120 kilometers long to reach the same energy.”
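
CERN’s comparison follows from basic beam physics: a proton’s bending radius scales as r ≈ p / (0.3·B), with p in GeV/c, B in tesla and r in meters, so ring size scales inversely with field strength. A rough sketch with illustrative field values (the LHC’s 27 km also includes straight sections, so this only approximates the bending contribution):

```python
import math

def bending_circumference_m(p_gev, b_tesla):
    """Circumference of the bending sections needed to steer a proton of
    momentum p around a circle: r[m] ~ p[GeV/c] / (0.3 * B[T])."""
    radius = p_gev / (0.3 * b_tesla)
    return 2 * math.pi * radius

lhc = bending_circumference_m(7000, 8.33)    # superconducting dipoles: ~17.6 km of bending
normal = bending_circumference_m(7000, 1.8)  # iron-dominated magnets saturate near ~2 T
ratio = normal / lhc                         # ~4.6x longer, same ballpark as CERN's 120/27
```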

Removing the need to cool these magnets could open up a new world of particle accelerator development, and potentially bring about discoveries in many other fields as well. Scientific discovery in one field would fuel scientific discovery in another.

The post The LK-99 Saga: For Engineers, the Dream of Room Temperature Superconductors Lives On appeared first on Engineering.com.

]]>
Quick Simulation and Prototyping Saves NASA’s OSIRIS-REx Mission https://www.engineering.com/quick-simulation-and-prototyping-saves-nasas-osiris-rex-mission/ Tue, 15 Aug 2023 10:21:00 +0000 https://www.engineering.com/quick-simulation-and-prototyping-saves-nasas-osiris-rex-mission/ Surprise delay forces engineers to design and build a part in 24-hours.

The post Quick Simulation and Prototyping Saves NASA’s OSIRIS-REx Mission appeared first on Engineering.com.

]]>
Next month, the first US asteroid sample return mission, OSIRIS-REx, lands in the Utah desert. In preparation for this first-of-its-kind mission, NASA has been conducting critical rehearsals to secure, transport and analyze the samples upon return. However, an unexpected part delay put one of these rehearsals at risk.

The story of OSIRIS-REx so far.

Engineering teams at NASA’s Johnson Space Center are a crucial part of the rehearsal process. They have been rapidly designing, simulating, prototyping and manufacturing all the tools needed to successfully handle the asteroid samples upon return. Just before a June rehearsal, the team got word that a manufacturer would not have a critical component made in time. With just 24 hours to create the part themselves, the team sprang into action. Their simulations and advanced testing were key to creating a rehearsal-ready version of the part.

The OSIRIS-REx Mission and Ensuring Sample Safety

The Origins, Spectral Interpretation, Resource Identification, Security – Regolith Explorer (OSIRIS-REx) spacecraft is on a long voyage home to bring back asteroid samples. Launched in 2016, it reached its target asteroid, Bennu, in 2018. After capturing its samples in 2020, it soon began its journey back to Earth.

Mosaic image of asteroid Bennu is composed of 12 PolyCam images collected on Dec. 2 by the OSIRIS-REx spacecraft from a range of 15 miles (24 km). (Image: NASA.)

“When it came to OSIRIS-REx and the complexity and sophistication of the sampling mechanism, the TAGSAM, it was something that hadn’t been seen before at this scale,” Salvador Martinez, lead technology development engineer at Johnson Space Center, told engineering.com.

The OSIRIS-REx mission successfully placed the spacecraft’s sample collector head into its Sample Return Capsule in 2020. (Image: NASA.)

As of July 26, NASA reported in a blog that the spacecraft was 24 million miles (38.6 million kilometers) away, and traveling at about 22,000 miles per hour toward Earth. The craft will cover those millions of miles over the next few weeks, finally landing its samples in the Utah desert on September 24, 2023. Upon landing, the exploration part of the mission is complete, but the science is just beginning.

The safe retrieval, transportation, processing, division and analysis of the sample is a complex engineering challenge in and of itself. The material choices for retrieval tools are heavily restricted and interaction with the sample is limited to avoid any contamination of the extraterrestrial rocks and dust. These samples provide a look back to billions of years ago, just as the Sun and planets were forming. Ensuring a pristine environment for these samples is crucial to getting an accurate window into the past.

“It is one more step in being able to piece together how our system formed and evolved,” said Eileen Stansbery, chief scientist at Johnson Space Center, at a NASA media event.

Ensuring Sample Safety Requires Engineering and Rehearsals

Conducting a new type of scientific study at this scale required engineers to create new tools and processes. NASA dedicated an internal engineering team to the OSIRIS-REx mission to take on this challenge.

Led by Martinez, the team oversees creation of the containment structures and tools needed to process all the samples upon their return to Earth. From rigs to change the orientation of the sample containers, to devices that can evenly divide the rocks into sample trays, there were many designs to create in a short time.

“There is a need for disassembling this TAGSAM [in a glove box] in a way that keeps the sample pristine,” Martinez said. “I think it’d be challenging to do this disassembly even on the table, much less wearing gloves and wearing a cleanroom suit. This is something that has never been tried before, so there’s a lot of trial and error, and a lot of rehearsing.”

The 24-Hour Challenge

As the date of one of the major rehearsals approached, the engineering team was working hard to get all the glove boxes and tools ready. In the midst of final preparations, they received news from a manufacturer.

“One of the components that we had sent out to be machined was not going to arrive in time,” Martinez said. “This happens to you sometimes. You send things out and you know with schedules and delays and shortages, there’s so many compounding factors throughout this entire process.”

They learned of the delay on a Monday. Rehearsal was Thursday morning, and the part had to go through several sterilizations, assembly and transportation steps before it could be used. In short, the engineering team was on a tight schedule.

“We were still working on getting all the other equipment in and we knew then we had a really short timeframe to even think of a solution for a stand-in,” Martinez said. “We just started going to the drawing board.”

Top view of the stand-in baffle being installed for curation rehearsal. (Image: NASA.)

The part in question was a critical baffle used to disperse the sample into dedicated containers upon its removal from the TAGSAM. The rehearsal used this part heavily to test out the hardware and how it’s used to disassemble the container and disperse the rocks.

“That was one of the components we closely analyzed and rehearsed over and over again,” Martinez said. “No matter how you pour the sample off of this TAGSAM, it actually funnels directly into these trays underneath. It’s got very complex edges and contouring.”

This was one of the most difficult-to-manufacture components of their design. The team had done several analyses and simulations in advance to ensure it was strong enough, while also light enough to manipulate with gloves inside of a box.

Recreating the Baffle

To originally design the part, the engineers had started with hand calculations, moved on to strength-of-materials calculations and free-body diagrams, and then brought in linear static analysis to look at deformations and stresses. The group primarily used PTC Creo simulation software for their FEA linear static analysis.

Simulation stress results for a baffle prototype made of Lexan material. (Image: NASA.)

Martinez says they simulated the use of different material options and eliminated most of them from the design. Although the final product’s material selection was limited to a few options, they also analyzed the design’s strength when created from other prototyping materials. This ensured the prototypes accurately reflected the properties of the final design.
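
The kind of hand calculation the team started from can be sketched with classic strength-of-materials formulas: treating a sheet as a simply supported beam, the peak bending stress is σ = Mc/I. The dimensions, load and material values below are made up for illustration and are not the flight part’s:

```python
def max_bending_stress_pa(load_n, span_m, width_m, thick_m):
    """Peak bending stress for a centrally loaded, simply supported
    rectangular section treated as a beam: sigma = M * c / I."""
    moment = load_n * span_m / 4            # max moment at midspan, N*m
    c = thick_m / 2                         # distance to outer fiber, m
    inertia = width_m * thick_m ** 3 / 12   # second moment of a rectangle, m^4
    return moment * c / inertia

# Made-up numbers: a 50 N load across a 0.3 m span of 3 mm Lexan
# (polycarbonate) sheet, 0.2 m wide; yield strength ~62 MPa.
stress = max_bending_stress_pa(50, 0.3, 0.2, 0.003)  # ~12.5 MPa
margin = 62e6 / stress                               # factor of safety ~5
```

Checks like this bound the design space quickly before FEA refines the answer for the part’s complex edges and contours.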

This knowledge came in handy when working to deliver a stand-in baffle in 24 hours. While the rehearsal piece didn’t have to exactly match the final product design, they wanted to get as close as possible. Luckily, working quickly and prototyping fast is something the engineering team is used to.

“The way that we had tried to develop the hardware was we iterated, and we provided solutions. As soon as we get it into the end users’ hands, we can analyze the operations and try and identify if there are any changes needed and we need to iterate again,” Martinez said.

After considering their past design work and testing, balanced with the speed at which they had to deliver their creation, they settled on their primary manufacturing technique.

“We looked at what was available to be waterjet cut because that was going to be quick,” Martinez said. “[It was] too big to waterjet it out of one piece. We had to waterjet it in two halves and make a bracket that connects both halves, two sides, and then we have to rivet them together. On top of that, how do we attach it to what’s going to be the circular boundary on the outside?”

Three engineers measuring and cutting material to attempt replicating the circular features of the baffle during their 24-hour rush to create a stand-in baffle. (Image: NASA.)

The group started experimenting with bending sheet metal and adding rivets to get close to the performance of the final design without adding too much material, just like they had simulated during the design process.

“We reached the point which was about as good of an approximation as we could provide [in] a little over 24 hours from us beginning to go into the whiteboard and saying, ‘How do we do this?’” Martinez said.

The stand-in baffle was delivered on time to be sterilized and assembled in the glove boxes for the rehearsal. It featured prominently in NASA’s photographs and the video taken of the test run.

“I was pleasantly surprised after seeing it cleaned,” Martinez said. “It does not look like it was made in a day.”

Operators were able to lift and maneuver the part and test the pouring of variously sized Teflon balls into the baffle, which dispersed them into sample containers. The balls simulated the rock and dust that will be dumped into the baffle during the actual return.

Although this was just a rehearsal, the positive results reinforced the group’s approach to processing and portioning out the very precious samples of asteroid Bennu. As the asteroid samples continue to fly closer to Earth, the engineering team is continuing their work, ensuring the tools are perfected for landing day.

The post Quick Simulation and Prototyping Saves NASA’s OSIRIS-REx Mission appeared first on Engineering.com.

]]>
Sustainability, Innovation are Key Benefits of Digital Twins in Auto Industry: Survey https://www.engineering.com/sustainability-innovation-are-key-benefits-of-digital-twins-in-auto-industry-survey/ Thu, 22 Jun 2023 09:36:00 +0000 https://www.engineering.com/sustainability-innovation-are-key-benefits-of-digital-twins-in-auto-industry-survey/ Report from Altair also finds a gap between adoption and expertise among automakers.

The post Sustainability, Innovation are Key Benefits of Digital Twins in Auto Industry: Survey appeared first on Engineering.com.

]]>
Last year, engineering.com predicted that digital twins would no longer be optional in 2023 and recommended businesses increase their exploration of the concept and its associated technology. That prediction appears accurate within the automotive industry, judging from survey results released in May by CAE software developer Altair.

(Image: Altair.)

Fully 76 percent of auto industry respondents said their organizations use digital twins to some degree, placing their sector second to heavy equipment (77 percent) among the 11 industry groups surveyed.

Moreover, automotive adopters claimed to be reaping substantial benefits, with 97 percent saying digital twins have helped to inform new product development and 92 percent saying they’ve helped to create more sustainable products and processes.

The survey results suggest that sustainability is an increasingly important consideration for automakers, who are pivoting from traditional internal-combustion engines toward zero-emissions vehicles. Nearly two thirds (63 percent) of automotive respondents said their organizations were purposefully using digital twins to achieve sustainability objectives; that’s eight percentage points higher than the average of all industries surveyed.

“Between consumer demand, government expectations and global emissions targets, the race is on for automakers to keep EV production on track,” said Royston Jones, Altair’s senior vice-president of automotive, in a release. “This report’s findings show the importance of digital twin technology in achieving those goals. While many have already adopted this technology into their processes, there is still tremendous room for education on the benefits that will lead to a rapid expansion of its use across the industry and beyond.”

Altair defines digital twins as “the process of using data streams to create a digital representation of a real-world asset to improve collaboration, information access and decision-making.” As virtual replicas of real-world objects, processes or systems informed by real-time data, digital twins can be used for diagnostics, simulations and tests. The data provided by these assessments can be used to optimize manufacturing processes, refine maintenance scheduling and more.
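
Altair’s definition can be made concrete with a toy sketch: a virtual replica that ingests a stream of sensor readings, updates its modeled state and answers maintenance questions. The class, thresholds and aging model below are entirely illustrative, not Altair’s tooling:

```python
from dataclasses import dataclass, field

@dataclass
class BatteryPackTwin:
    """Toy digital twin: a virtual replica kept current by a stream of
    sensor readings and queried for maintenance decisions. The class,
    thresholds and aging model are illustrative only."""
    capacity_kwh: float
    temps_c: list = field(default_factory=list)

    def ingest(self, temp_c):
        # Stream in a reading and update the modeled state.
        self.temps_c.append(temp_c)
        if temp_c > 45:                  # hot cycles age the cells faster
            self.capacity_kwh *= 0.999

    def needs_service(self, rated_kwh):
        # Flag the real asset once modeled capacity drops below 80%.
        return self.capacity_kwh < 0.8 * rated_kwh

twin = BatteryPackTwin(capacity_kwh=75.0)
for reading in [30, 50, 52, 41]:         # telemetry from the physical pack
    twin.ingest(reading)
```

The value comes from running diagnostics and what-if simulations against the replica instead of the physical asset.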

Knowledge gaps could hamper digital-twin initiatives

Despite the auto industry’s high digital twin adoption rate, only 35 percent of respondents considered themselves to be “highly knowledgeable about digital twin solutions.” That was the second-lowest percentage out of the industries surveyed. This knowledge gap could be due to a lack of experience, given that 28 percent of automotive respondents said their organizations had been using digital twins for no more than six months—the highest proportion among the industries surveyed.

The gap between the rate of digital twin adoption and the speed at which the workforce is learning how to use them could be cause for concern. Digital twins are not easy to set up and operate. Without access to sufficient knowledge, organizations are likely to see diminished returns from their digital-twin investments.

Altair’s survey touches upon another dilemma: a mismatch between the perceived value of digital twins in general and the business case for individual organizations. For instance, a 2022 survey by business consultancy Capgemini found that 55 percent of executives surveyed believed that digital twins are a strategic part of digital transformations, but 42 percent of them struggled to envision their deployment.

Among the automotive professionals in roles below senior management surveyed by Altair, 92 percent believed their organization’s leaders would be more likely to invest in digital twin technology if they better understood its benefits, including its ability to “provide an abundance of data for more efficient research,” “reduce costs of production” and “lead to better development outcomes.”

Perhaps the biggest opportunity implied by the data is for engineers themselves to seize: career advancement through digital twin expertise.

Altair’s 2023 Global Digital Twin Survey Report Vertical Breakdown: Automotive is based on a survey of more than 2,000 professionals in 10 countries. It was conducted in May 2022.

The post Sustainability, Innovation are Key Benefits of Digital Twins in Auto Industry: Survey appeared first on Engineering.com.

]]>
How Cloud Computing Hardware Is Powering the Next Era of Space Exploration https://www.engineering.com/how-cloud-computing-hardware-is-powering-the-next-era-of-space-exploration/ Tue, 30 May 2023 05:38:00 +0000 https://www.engineering.com/how-cloud-computing-hardware-is-powering-the-next-era-of-space-exploration/ What engineers can learn from cloud experiments aboard the ISS.

The post How Cloud Computing Hardware Is Powering the Next Era of Space Exploration appeared first on Engineering.com.

]]>
Just as cloud computing enables engineers to bring computing resources where they are needed on Earth, it is now powering technology in an even more remote location: space.

NASA is using the International Space Station (ISS) as a testing ground for many technologies that could power the future of space exploration, including cloud computing.

In space, resources are scarce. Seconds can matter when it comes to safety. Cloud computing is a crucial tool that can bring space exploration into a new space age—an era when computing resources can provide quick test results and increase research capacity.

Spaceborne Computer-2 on the ISS. (Image courtesy of NASA.)

Numerous companies from Hewlett Packard Enterprise (HPE) to Amazon have already launched cloud computing hardware studies to the ISS. Here’s what they have learned and how they are laying the groundwork for cloud computing on future commercial space stations.

Off-the-Shelf Cloud Computing in Space

As more complex scientific work is conducted in space, more computing power is needed to support it. The SG100 Cloud Computing Payload was one of the first tests of how advanced computers stand up to the intense radiation of space. Created by Business Integra, this computer was based on the data computers within the Alpha Magnetic Spectrometer-02, a physics experiment that has resided on the outside of the space station for more than a decade.

SG100 was launched to empower engineers, scientists and researchers to perform significant data analysis aboard the space station before sending it back to Earth. But the first question was whether this type of cloud hardware is radiation hardened, or at least radiation tolerant.

“Radiation hardened means no matter what, radiation will not affect this processor,” said Trent Martin, SG100 Cloud Computing Payload’s primary investigator in the NASA article “Beyond the Cloud: Data Processing from Space.” “Whereas radiation tolerant means that most likely, nothing will happen—but if it does, it won’t be detrimental. The processor won’t die.”

During the 2 years that the system was tested in orbit, no data was lost to radiation damage. This demonstration paved the way for more affordable data processing beyond Earth.

Cloud Computing at the Edge—of Space

HPE installed the Spaceborne Computer-2 in the ISS with the goal of pushing the boundaries of how scientists could use artificial intelligence and cloud computing in space.

“We want to have thousands of proofs of concept so that onboard data processing can be shown to seriously benefit the scientists and engineers back on Earth,” said principal investigator Mark Fernandez, solutions architect for converged edge systems at HPE, in the NASA article “New Research Launching to Space Station Aboard Northrop Grumman’s 15th Resupply Mission.” “I want to get our brilliant minds throughout the world working on the insights rather than the number crunching.”

As the name implies, Spaceborne Computer-2 is a follow-up to a successful first Spaceborne Computer study, which launched a commercial off-the-shelf computer to orbit around the same time as SG100. The goal was similar—to see how it would survive launch and space radiation. After 8 months in Earth’s orbit, NASA reported that “the Spaceborne Computer was still demonstrating teraflop performance rates while showing only a 0.03% difference to the ground computers running in parallel.”
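The parallel-run validation NASA describes, running the same workload in orbit and on an identical ground twin and comparing results, reduces to a simple relative-difference check. The teraflop figures below are invented for illustration, not HPE’s published numbers:

```python
# Sketch of a parallel-run performance check: the same benchmark runs on the
# orbital computer and on an identical ground twin, and the relative gap is
# compared against an acceptance threshold.

def relative_diff(space_result: float, ground_result: float) -> float:
    """Relative performance difference between space and ground runs."""
    return abs(space_result - ground_result) / ground_result

# Illustrative teraflop measurements (not real HPE data)
gap = relative_diff(space_result=1.0437, ground_result=1.0440)
assert gap < 0.0005  # within the roughly 0.03% band NASA reported
```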

The success led to the second experiment in 2021, which launched a computer with twice the computing power aboard a Northrop Grumman commercial resupply mission to the space station. Because the ISS has very limited bandwidth, this experiment was designed to perform the majority of the processing in space and only send back to Earth what was truly necessary.

“Many experiments that run on the space station primarily collect data and send it back to Earth,” Fernandez said in the NASA article “Technology Tested in Space is Preparing Us for the Moon and Mars.” “We want to move computing to where data are generated or collected, whether that is in space, or on your oil rig or aircraft, to turn a sample into insight as fast as possible. You process at the edge and get the go or no-go or safe-unsafe answers you need.”

As engineering.com reported in 2021, Microsoft and HPE were able to complete a 200GB genomics experiment aboard the orbiting laboratory utilizing the HPE Spaceborne Computer-2 and Azure, Microsoft’s cloud computing platform. To process this data, Microsoft developed a technique that would enable the system to automatically “burst” down from space into the huge network of Azure computers when it ran out of on-station computing capacity. In 2022, Azure and Spaceborne Computer-2 teamed up once again, this time to successfully help NASA’s spacewalk team use artificial intelligence to scan the gloves of spacesuits for damage. This is an activity that was previously done by humans back on Earth over long periods of time. Read more about this project in the engineering.com article “Keeping Astronauts Safe with Cloud-based AI in Space.”
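The “burst” pattern can be sketched in a few lines of Python: keep work on the edge node while local capacity allows, and overflow the remainder to a cloud backend. The job names, data sizes and capacity threshold here are illustrative assumptions, not Microsoft’s actual implementation.

```python
def schedule_jobs(jobs, local_capacity_gb, cloud_available=True):
    """Assign jobs to the edge node until capacity runs out, then
    'burst' the remaining jobs to the cloud backend."""
    local, burst = [], []
    used_gb = 0.0
    for name, size_gb in jobs:
        if used_gb + size_gb <= local_capacity_gb:
            local.append(name)       # stays on the edge node
            used_gb += size_gb
        elif cloud_available:
            burst.append(name)       # overflows to the cloud
        # otherwise the job waits for local capacity to free up
    return local, burst

# Hypothetical genomics workloads: (name, data size in GB)
jobs = [("align_reads", 60.0), ("call_variants", 90.0), ("annotate", 80.0)]
on_station, to_cloud = schedule_jobs(jobs, local_capacity_gb=120.0)
# on_station -> ["align_reads"]; to_cloud -> ["call_variants", "annotate"]
```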

Spaceborne Computer-2 and Azure working together showcase the increased speeds and insights that can be accomplished by pairing edge and cloud technology.

Azure was also recently tested by Ball Aerospace and Microsoft for processing satellite data back on Earth with successful results. The findings indicate that with the right technology, even if it’s off the shelf, information gathered in space can be quickly and easily utilized in a range of industries—from space exploration to agriculture and disaster response.

Equipping Future Orbital Outposts

Axiom is one of the companies demonstrating what the future of a low-Earth orbit economy could look like. The company is currently developing its own commercially owned space station to be launched into Earth orbit.

In preparation, Axiom is first constructing a commercial module that will attach to the ISS, and is leading several private crewed missions to the existing orbital outpost. Aboard its first mission, Axiom-1, the company launched an Amazon Web Services (AWS) Snowcone solid-state drive. The small size and weight of the Snowcone—it measures 9 in x 6 in x 3 in and weighs just 4.5 pounds—made it a good choice for the space industry, which needs to count every inch and ounce.

An AWS Snowcone SSD aboard the ISS. (Image courtesy of Amazon.)

The Snowcone was launched unmodified but was wrapped in orange Kapton tape (a commonly used tool in space) to provide extra electrical and thermal protection. The crew used this cloud device in orbit to analyze photographs taken during science experiments with machine learning. Research activity aboard the ISS can produce terabytes of data each day, meaning any increase in data processing speed can make a major impact for researchers.

For this trial, the photos were first stored in Network Attached Storage (NAS) aboard the space station and then transferred to the Snowcone over a local network for analysis. Once the files were on the Snowcone, Amazon said in the blog post “How We Sent an AWS Snowcone into Orbit” that the machine learning model, which was scanning and identifying all equipment in frame, took only 3 seconds to run. By eliminating the need to send all the information back to Earth, the full photo analysis process was reduced from a typical 20 hours to 20 minutes.
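A back-of-envelope comparison shows why analyzing on orbit pays off. The bandwidth, photo-count and runtime figures below are illustrative assumptions, not published ISS numbers; the point is only that downlinking raw data dominates the timeline.

```python
def downlink_hours(data_gb: float, link_mbps: float) -> float:
    """Hours to downlink raw data: GB -> megabits -> hours."""
    return data_gb * 8000.0 / link_mbps / 3600.0

def onboard_hours(n_photos: int, sec_per_photo: float,
                  results_mb: float, link_mbps: float) -> float:
    """Hours to analyze photos locally and downlink only the results."""
    analyze = n_photos * sec_per_photo / 3600.0
    send = results_mb * 8.0 / link_mbps / 3600.0
    return analyze + send

raw = downlink_hours(data_gb=50.0, link_mbps=10.0)         # ~11 hours
local = onboard_hours(n_photos=300, sec_per_photo=3.0,
                      results_mb=5.0, link_mbps=10.0)      # ~15 minutes
```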

This is just the first step for Axiom, though. As Axiom moves to constructing its own station, it plans to partner with Microsoft and LEOcloud to move in-space cloud computing from the experiment stage to the execution stage.

The ISS has been constructed over the course of more than two decades, meaning its computer capabilities have received only incremental improvements over time. By creating a new station from scratch, Axiom has a real opportunity to offer a big leap in computing to the space community.

The post How Cloud Computing Hardware Is Powering the Next Era of Space Exploration appeared first on Engineering.com.

]]>
3 CFD Simulations in Space Answer Big Questions https://www.engineering.com/3-cfd-simulations-in-space-answer-big-questions/ Fri, 28 Apr 2023 05:20:00 +0000 https://www.engineering.com/3-cfd-simulations-in-space-answer-big-questions/ How NASA and its partners are using CFD for bleeding-edge aerospace research

The post 3 CFD Simulations in Space Answer Big Questions appeared first on Engineering.com.

]]>
Computational fluid dynamics (CFD) simulations have been used in the aerospace industry for decades to create lightweight designs, improve aerodynamics, reduce friction during high velocity scenarios like reentry and much more. But the experts at NASA and their partners are always pushing the envelope to see how CFD can expand state-of-the-art technology and even help us explore further into the solar system.

Here is a look at three unique examples—some big and some small—all of which use CFD to make meaningful change in the aerospace industry.

Big Ass Fans in Space

What works on Earth is not guaranteed to work in space, and that includes airflow models. Kentucky-based fan maker Big Ass Fans uses CFD models in Ansys to perfect its blade shapes and fan designs, as well as to simulate airflow in its clients’ spaces.

Ansys simulation of the Big Ass Fans CFD experiment conducted aboard the International Space Station. (Image courtesy of Big Ass Fans.)

But when the company had the chance to test its fans and models in a NASA environment, it jumped at the opportunity. In 2019 Big Ass Fans launched a scaled-down study that ran fans on the International Space Station (ISS). The goal was to test whether CFD simulations of airflow were impacted by microgravity, or if these simulations could accurately be used for airflow conditions on future space flight missions.

The first photo taken of Big Ass Fans’ mascot, Fanny, in space inside the company’s experiment. (Image courtesy of Big Ass Fans.)

“Our ultimate goal in the project started out as a call to action within the company,” Mike Smith, prototype engineer on the project, told engineering.com. “What came out of it was, let us do a CFD experiment with Fanny [the company’s mascot] in space and collect the data and then run that against our CFD simulations. We compared and contrasted the data that we collected in microgravity at the space station versus how we would see it calculated using our traditional CFD methods.”

Smith handled the design and placement of the project’s wind tunnel and assembly of the prototype, and chose how to mount and locate the sensors in the 1-foot-by-1-foot container. The team did not expect microgravity to cause substantially different airflows because the container was a sealed and controlled box, but was excited to launch the study and find out.

“It came out like we expected, but the exciting aspect of proving that we had the capability, whether it was here on Earth or in space, to be able to give accurate simulations was a big part of it for us,” Smith said.

The data and results were given to Space Tango, the company that helped package and integrate this study for launch, as well as to others in the microgravity research community. Having a better understanding of the accuracy of airflow CFD simulations in space will benefit the scientific community for years to come.

While Smith says Big Ass Fans is not planning to launch any more fans to space anytime soon, he is still seeing the company’s CFD simulations make an impact in crucial space facilities back here on the ground.

“Terrestrially here on Earth, a lot of the NASA facilities and a lot of Air Force facilities do have our products on site,” Smith said. “So, while we may not be sending one of our fans directly to space, we are helping get things to space.”

Monitoring Fuel Slosh

Spacecraft must carry fuel for launch and their long journey ahead. This means that engineers have fought the forces caused by fluid motion since the early days of space travel. Until very recently, the full impact of that propellant slosh in low gravity situations was not well understood, meaning increased uncertainty and risk for payloads.

NASA astronaut Mike Hopkins holds a plastic container partially filled with green-colored water, which was used for the SPHERES-Slosh experiment aboard the International Space Station. (Image courtesy of NASA.)

The SPHERES-Slosh study conducted aboard the ISS helped change that, providing the groundwork to propel new space slosh research.

“Before we launched this, there really wasn’t any data of low gravity slosh like this for the purposes of validating CFD,” Brandon Marsell, fluids and CFD engineer at the Launch Services Program of the NASA Kennedy Space Center, told engineering.com. “That is what we built this for, and so getting that data was a really big deal. It is the only set of data in the world for this.”

After seeing launches from international partners delayed for slosh issues, a team led by principal investigator Paul Schallhorn in NASA’s Launch Services Program saw the need to verify that their microgravity CFD models were accurate. His group teamed up with the Florida Institute of Technology and Massachusetts Institute of Technology to provide a slosh dataset for the whole industry.

The setup attached the experiment’s clear tanks, which contained green water, to the station’s Synchronized Position Hold Engage Reorient Experimental Satellites (SPHERES), free-floating robots that serve as instrumented platforms.

Teams on the ground ran simulations for how they expected the fluid to perform under specific forces in OpenFOAM, STAR-CCM+ and other CFD software. The SPHERES then moved to create these forces. Astronauts were asked to move the tanks at faster speeds as well. While the CFD simulations made by the team matched adequately in some scenarios, the experiments showcased where the models based on existing data fell short.

“It wasn’t terrible. Often, especially if we could get a good initial condition, they would track fairly well at the beginning,” Jacob Roth, SPHERES-Slosh study co-investigator, told engineering.com. “It is a standard situation where you have a ton of forces that are small but build up over time and things start to deviate. And then there are some things that CFD simply could not capture.”

Those things that could not be reflected in models included the interactions of air bubbles and even some surface tension interactions in microgravity.
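The kind of drift Roth describes, where small unmodeled forces accumulate until simulation and measurement diverge, can be illustrated with a toy validation check. The trajectories below are synthetic stand-ins, not SPHERES-Slosh data:

```python
def running_rms_error(simulated, measured):
    """RMS error over the first k samples, for each k, to expose drift."""
    errors, total = [], 0.0
    for k, (s, m) in enumerate(zip(simulated, measured), start=1):
        total += (s - m) ** 2
        errors.append((total / k) ** 0.5)
    return errors

sim  = [0.00, 0.10, 0.20, 0.30, 0.40]   # CFD-predicted fluid position
meas = [0.00, 0.11, 0.23, 0.36, 0.50]   # "measured" position with drift
drift = running_rms_error(sim, meas)
# Good agreement early, growing error later -- the pattern Roth describes
```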

The data collected from this experiment is already being used in slosh calculations for current spacecraft. Marsell was working on the Magnetospheric Multiscale (MMS) Mission, which had strict requirements and small margins.

“We ended up using this CFD code that was validated by the ISS data and coupled it to a controls code,” Marsell says. “We ran the mission that way to get more accurate numbers on what sort of delta-v we can expect at payload separation.”

In the future, as refueling stations are set up in space for spaceships and satellites, this knowledge will be even more important. Fluids and our understanding of them are what will get us to the Moon, Mars and beyond.

CFD Studies to Go Back to the Moon

NASA plans to return to the Moon in the next few years by launching humans aboard the Space Launch System (SLS) rocket for the first time. The Artemis II mission, planned to launch in late 2024 to orbit the Moon, will have a crew of four: NASA astronauts Reid Wiseman, Victor Glover, and Christina Hammock Koch, and Canadian Space Agency astronaut Jeremy Hansen.

Putting people on board a new rocket comes with a certain degree of risk, but teams are using CFD tools to make things a bit safer.

Simulation conducted on the separation of the solid rocket boosters from the SLS rocket. (Image courtesy of NASA.)

SLS has two solid rocket boosters mounted on its side for the initial liftoff. Around 2 minutes into flight, they separate and fly away from the rocket thanks to 16 separation motors. This period of flight is extremely hard to model but is a crucial time to examine to ensure these boosters do not hit the main core of the rocket.

Engineers are using two main tools to simulate this period of flight: FUN3D and OVERFLOW.

FUN3D, or Fully Unstructured Navier-Stokes, is a custom CFD suite of fluid flow modeling tools developed at NASA for use in aeronautics, space, technology and exploration. OVERFLOW is a NASA-developed CFD flow solver. All CFD simulations for this project were run on the Pleiades and Electra supercomputers at the NASA Advanced Supercomputing facility. See a magnetic field simulation that was also run on the Pleiades supercomputer here.

According to NASA, the aerodynamic data for these booster simulations is “a function of 13 independent variables: six describing position (three for translation and three for rotation), four describing free-stream conditions (angles of attack/sideslip, Mach number and air density) and three describing thrust conditions (core, booster, and booster separation motors).”
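Conceptually, every entry in such an aerodynamic database is keyed by those 13 variables. A minimal sketch of that state, grouped as NASA describes it (the field names are illustrative, not NASA's):

```python
from dataclasses import dataclass, astuple

@dataclass(frozen=True)
class SeparationState:
    # Six position variables
    x: float                # translation
    y: float
    z: float
    roll: float             # rotation
    pitch: float
    yaw: float
    # Four free-stream conditions
    alpha: float            # angle of attack
    beta: float             # sideslip angle
    mach: float
    density: float          # air density
    # Three thrust conditions
    core_thrust: float
    booster_thrust: float
    separation_motor_thrust: float

state = SeparationState(0, 0, 0, 0, 0, 0, 2.0, 0.1, 4.0, 0.05, 1.0, 0.2, 1.0)
assert len(astuple(state)) == 13   # one database key per 13-variable point
```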

“Ultimately, the results of these simulations will be critical in reducing the risk to the crew of Artemis II, the mission’s first crewed flight test, during booster separation,” said Jamie Meeroff, acting deputy chief of NASA’s Computational Aerosciences Branch at Ames Research Center, in an article on NASA.gov.

This information feeds into the creation of a database that can then be used by the Guidance, Navigation and Control team at Marshall Space Flight Center.

The post 3 CFD Simulations in Space Answer Big Questions appeared first on Engineering.com.

]]>
MSC Reports Apex Release Can Build CAE Model Assemblies in Minutes https://www.engineering.com/msc-reports-apex-release-can-build-cae-model-assemblies-in-minutes/ Mon, 12 Jun 2017 09:45:00 +0000 https://www.engineering.com/msc-reports-apex-release-can-build-cae-model-assemblies-in-minutes/ Apex Grizzly release speeds up modeling and expands structural analysis capabilities.

The post MSC Reports Apex Release Can Build CAE Model Assemblies in Minutes appeared first on Engineering.com.

]]>
Apex Grizzly software by MSC. (Credit: https://www.youtube.com/watch?v=zXoOe1HruFg)

MSC Software Corporation has announced Grizzly, the seventh release of MSC Apex, its Computer Aided Engineering (CAE) platform.

Grizzly expedites modeling and validation tasks in an integrated and generative workflow, allowing rapid iteration on the design to validate the stiffness, strength, and stability of large assemblies. Operations that previously took hours can now be performed in minutes.

The release also introduces new geometry clean-up and de-featuring tools to eliminate manual rework, allowing the user to automate model preparation tasks. Additionally, a new implementation of glue and tie connections speeds up assembly creation and allows users to create large assemblies of parts using mesh-dependent connections while preserving the product structure.

“With MSC Apex Grizzly we have seen our Beta testers build and validate designs of thousands of parts in a day,” said Hugues Jeancolas, MSC Apex Senior Product Manager.

MSC is also working to solve the issues faced by users working with massive assemblies such as large cranes, ship hulls, or shipyard ramps. These assemblies often include thousands of parts welded together, and present unique challenges to finite element modeling, model validation and simulation. Users may need weeks to months to build finite element models, and often struggle to run simulation for more than a few design iterations. Apex Grizzly is working to provide a more accurate alternative to the previous approximate methods used to evaluate these large fabricated structures.

Additionally, MSC Apex builds on its previous strides in its integrated and generative framework with new analysis-readiness remedy tools and an improved user experience for scenario set-up, model validation and simulation execution. MSC Apex Grizzly also features a macro record and replay capability to help users develop Python scripts to automate geometry modeling, meshing and scenario set-up tasks. Overall, Apex Grizzly is built around the goal of creating a faster and more accurate tool.

This release comes after some of MSC’s other recent releases and acquisitions, including its acquisition of autonomous vehicle simulation software company VIRES Simulationstechnologie and its release of Adams Real Time.

Apex Grizzly will be available for use in June 2017.

The post MSC Reports Apex Release Can Build CAE Model Assemblies in Minutes appeared first on Engineering.com.

]]>