Michael Alba, Author at Engineering.com
https://www.engineering.com/author/michael-alba/

AI takes on sketching and drawings in Autodesk Fusion
Mon, 27 Jan 2025 | https://www.engineering.com/ai-takes-on-sketching-and-drawings-in-autodesk-fusion/
Sketch AutoConstrain and Automated Drawings show that AI doesn’t have to be perfect to save time for CAD designers.

At the annual Autodesk University user conference last October, Autodesk announced three AI-based features coming to Autodesk Fusion: Sketch AutoConstrain, Automated Drawings and Autodesk Assistant.

All these features are now live. Autodesk Assistant is, for the moment, a product support chatbot rather than the interactive helper described at AU. But Autodesk has delivered on AutoConstrain and updated Automated Drawings with new AI functionality, and we got to see them in action.

Engineering.com got a demo of these two new AI tools from Jeremy Stadtmueller, director of product management for Autodesk Fusion, and Bryce Heventhal, senior manager of technical marketing at Autodesk.

Here’s what we learned about Sketch AutoConstrain and Automated Drawings and what else is on the horizon for Autodesk Fusion.

A look at Fusion’s Sketch AutoConstrain

Sketch AutoConstrain—officially known as AutoConstrain in Fusion Automated Sketching—is an AI tool that analyzes sketch geometry to suggest dimensions and constraints. For example, AutoConstrain will add a perpendicular constraint to two lines drawn at right angles. Or a tangent constraint to a line touching an arc. Or a colinear constraint to the center points of two circles drawn side by side. And so on.
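The geometric tests behind suggestions like these are easy to sketch. Here's a minimal, hypothetical Python illustration (not Autodesk's implementation) of how a tool might flag perpendicular and tangent candidates:

```python
TOL = 1e-6  # tolerance for treating a value as "close enough"

def is_perpendicular(d1, d2, tol=TOL):
    """Two direction vectors are perpendicular when their dot product is ~0."""
    return abs(d1[0] * d2[0] + d1[1] * d2[1]) < tol

def is_tangent(p, d, center, radius, tol=TOL):
    """A line through point p with unit direction d is tangent to a circle
    when the perpendicular distance from the circle's center to the line
    equals the circle's radius."""
    vx, vy = center[0] - p[0], center[1] - p[1]
    dist = abs(vx * d[1] - vy * d[0])  # 2D cross product gives that distance
    return abs(dist - radius) < tol

# Two lines drawn at right angles -> suggest a perpendicular constraint
print(is_perpendicular((1.0, 0.0), (0.0, 1.0)))             # True
# A horizontal line touching the top of a unit circle -> suggest tangent
print(is_tangent((0.0, 1.0), (1.0, 0.0), (0.0, 0.0), 1.0))  # True
```

A real tool layers many such checks (coincident, colinear, symmetric, equal) and then ranks the combinations; the AI part is deciding which candidate set best matches the user's intent.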

It’s as easy as hitting the AutoConstrain button in the Fusion sketch menu. The tool generates a list of ways your sketch could be partially or fully defined. You can review the options to pick the one that best matches your intent—or generate more options until you find one that works. Fusion will automatically apply the dimensions and constraints. You can edit all of them manually and continue to use AutoConstrain to update your sketch as often as you like.

Sketch AutoConstrain presents users with several options to define their sketch, and always the option to generate more. (Image: Autodesk.)

The current version of AutoConstrain will not change your sketch geometry. Soon, however, it could have the ability to make slight tweaks, like rounding a dimension from 0.998 to 1. That functionality could exist today, Stadtmueller said, but “people get real nervous when tools change what they’ve drawn.” Autodesk wants users to feel comfortable with AutoConstrain, and that means taking it slow.

Our first question during the demo was, bluntly: Is this a gimmick? The answer was no. For one thing, Heventhal suggests that AutoConstrain will prove helpful for new Fusion users.

“One of the biggest major frustrations in learning CAD is the sketch,” he said. “Fully defining your sketch is… where most people screw up.”

With AutoConstrain, beginner Fusion users could avoid the frustrating errors of over- or underdefined sketches (and their colleagues could avoid the pain of having to fix their rookie mistakes).

What about experienced CAD modelers? These users may imagine AutoConstrain as a kind of 3D Clippy, a tool that aims to be helpful but just gets in the way.

Heventhal, an experienced CAD user, attests otherwise. Even he was skeptical of AutoConstrain at first, but with some slight adjustments to his workflow he realized the AI tool could do most of the work he once did manually to define sketches. Now he’s hooked.

“This is probably my most used [new] tool,” Heventhal said. “I’ll just throw out a couple dimensions, hit AutoConstrain and then go from there.”

It’s still early days for AutoConstrain. Stadtmueller said that right now, the AI is about as good as the best heuristic-based auto-dimensioning tools on the market. But the key difference is that AutoConstrain is always learning, aggregating data from across the Fusion userbase.

“It’s going to get better and better and better,” Stadtmueller said.

Automated Drawings is smartening up

Automated Drawings has been in Fusion since January 2024, and Stadtmueller said the tool “has seen huge adoption over the year it’s been out.”

Fusion Automated Drawings takes a 3D model and generates 2D drawings of the full assembly and each of its parts. That core functionality is based on user templates and heuristics, though Autodesk has now incorporated some AI-based features and plans to add more. The tool now includes an AI model that scans your geometry to classify standard fasteners and exclude them from the drawings.
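The exclusion step is simple to picture. Here's a hypothetical Python sketch (the labels, part names, and classify function are all invented for illustration; Autodesk hasn't published its classifier's interface):

```python
# Labels a fastener classifier might emit (hypothetical).
STANDARD_FASTENERS = {"bolt", "nut", "washer", "screw"}

def parts_to_draw(assembly, classify):
    """Return the parts that still need their own drawing sheet,
    skipping anything the classifier labels a standard fastener."""
    return [part for part in assembly if classify(part) not in STANDARD_FASTENERS]

# Toy stand-in for the AI classifier: keyword matching on part names.
# (The real classifier works on geometry, not names.)
def classify(name):
    for label in STANDARD_FASTENERS:
        if label in name.lower():
            return label
    return "custom"

assembly = ["Bracket", "M6 Bolt", "Housing", "M6 Nut"]
print(parts_to_draw(assembly, classify))  # ['Bracket', 'Housing']
```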

Running in the cloud, the Automated Drawings process takes only a few minutes, according to Heventhal. In the demo he showed Engineering.com, the tool took seven minutes to generate 53 drawings.

An assembly drawing generated with Automated Drawings. (Image: Autodesk.)

The drawings aren’t perfect. You’ll likely need to edit them to meet your aesthetic standards. But more automation can help: for any drawing, you can invoke the Auto Dimension tool. Like Sketch AutoConstrain, Auto Dimension will generate a list of ways to arrange the dimensions for your drawing. You can pick the one you like best and manually adjust it from there.

Auto Dimension generates several ways to place the dimensions of a drawing. Each can be customized with options for density and datum location. (Image: Autodesk.)

Even if it did nothing else, Automated Drawings saves users the time of manually creating pages for each part of their assembly. That time adds up, especially when you have lots of parts. And with the dimensions placed as well, users have only to refine their drawings rather than create them from scratch.

“The goal of this is to create these prints really quickly,” Stadtmueller said. He conservatively estimated that Automated Drawings can do 60% of the work of creating drawings, and that’s just a start. “Someday I think we’ll get to 100%,” he said.

Like AutoConstrain, Automated Drawings—at least the AI-based parts of it—will learn as users interact with it. That’s already happening with the fastener classifier. Autodesk also plans to use AI to learn how to place dimensions more elegantly, and in accordance with personal or company preferences. The more data these AI models get, the better they’ll be.

But both of the features we saw—Sketch AutoConstrain and Automated Drawings—are just the beginning of Autodesk’s plans for AI.

Autodesk’s North Star vision for AI

AI tools like AutoConstrain and Automated Drawings aim to make CAD more efficient. But even if they succeed, they’re still tacked on to a system that was built in another era.

As Stadtmueller put it: “Why do I care what a sketch engine needs to be stable?”

CAD, Stadtmueller said, is way too hard to use. While it’s important to help today’s users with time-saving tools like Sketch AutoConstrain and Automated Drawings, there’s a grander AI project at play. Stadtmueller called it Autodesk’s North Star vision for AI, a long-term goal to “change the paradigm on how design and manufacturing is done.”

What that paradigm change will look like, exactly, remains to be seen (who can say what AI will look like next year, or even next month?). But it may come quicker than you think. Stadtmueller suggested that Autodesk’s North Star vision for AI may come to fruition in a term that doesn’t feel all that long: just five to ten years. In the meantime, Stadtmueller said, we can expect Autodesk to deliver plenty of helpful AI features for the current design paradigm.

Nemetschek Group’s new AI Assistant is a start—but a small one
Tue, 21 Jan 2025 | https://www.engineering.com/nemetschek-groups-new-ai-assistant-is-a-start-but-a-small-one/
The AI chatbot will debut in Allplan and Graphisoft, and eventually spread to Nemetschek’s whole portfolio.

Welcome to Engineering Paper, our weekly roundup of design and simulation software news.

Today’s top story is Nemetschek Group’s new AI Assistant, a chatbot that will debut in both Allplan and Graphisoft Archicad.

In Archicad, the AI Assistant will be able to interact with BIM models in limited ways. For example, you could ask the chatbot to render your model in some particular style (such as with a wooden façade), and it will return an image generated with Nemetschek’s “AI Visualizer” powered by Stable Diffusion. You could also ask the AI Assistant to reveal some specific elements of your model, such as “the wall section at the East entry,” and it will bring up the proper view.

In Allplan, the assistant connects to the internet to help users find industry knowledge such as the minimum width of emergency exits in London.

You can see a brief demo of these capabilities in this video from Nemetschek:

This is the first manifestation of Nemetschek’s plan to launch an “artificial intelligence layer” across its portfolio this year, a plan which wasn’t so much a roadmap as a signpost declaring that Nemetschek has, in fact, heard of AI and does, in fact, plan to do something with it.

Well, this is something. The AI Assistant could prove to be a nifty feature for users of Allplan and Archicad, but by now chatbots are basically the “Hello World” of AI applications—the first step everyone takes when trying to figure out a new language. The real question is how far Nemetschek can go from here.

CAD in point: Acquisitions and updates

Here are some quick hits for your news radar:

  • Software reseller GoEngineer announced that it’s acquired Canadian reseller CAD MicroSolutions, effective as of January 3, 2025. CAD MicroSolutions customers will retain access to their current software licenses and annual maintenance plans, and can call the same support line as before, according to an FAQ posted by GoEngineer.
  • Jetcam released an update for CAD Viewer, its free software for viewing 2D CAD files. The update adds folder and file count display, window position and size memory, and other quality-of-life improvements.
  • Hexagon has acquired CAD Service, an Italian developer of visualization tools. Effective January 21, 2025, CAD Service will join Hexagon’s Asset Lifecycle Intelligence division.
  • Datakit announced version 2025.1 of its data exchange software, which includes enhanced support for 2D and 3D B-Rep geometry alongside other updates.

One last link

You have to love it when CAD marketers get catty. Piggybacking on the popularity of Peter Brinkhuis’ blog post 37 things that confuse me about 3DEXPERIENCE, Onshape posted a blog of their own: 37 Ways Onshape Simplifies What 3DEXPERIENCE Overcomplicates.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

Fun times before the CAD revolution
Tue, 14 Jan 2025 | https://www.engineering.com/fun-times-before-the-cad-revolution/
The times they are a-changin’ for computer aided design. Meanwhile, why not make a game of it?

Welcome to Engineering Paper, a weekly column serving you fresh design and simulation software news. And if it’s not fresh, we’ll douse it in so much sauce you won’t even notice.

For our first item, some spice.

I recently reported on the unexpected genre of CAD esports in The fastest 3D CAD modelers in the world. That story is about TooTallToby.com, where a dedicated community of 3D modelers, spanning many countries and software platforms, compete in CAD speed competitions. (Belated congratulations to RamBros, an Autodesk Fusion user from India, for winning the 2024 World Championship).

CAD speedrunning is now expanding to the next generation. TooTallToby.com has launched a tournament for CAD design students at Le Grand High School in Le Grand, California, that will play out through January (click here for the kickoff livestream from Friday, January 10).

I doubt any of my readers are eligible to compete, but I bring this up to share two thoughts.

One: I think these students will crush it. The top seed in last year’s competition was a high schooler, and he made it all the way to the semifinals. Even if none of the Le Grand students are currently CAD masters, I can’t think of a better way to motivate them to level up their game (for more on the pedagogical value of CAD speed modeling, read my original article).

Now the spice: I can’t help but wonder how long it will matter.

When will CAD skills, as we currently know them, become obsolete?

I don’t mean to sound cynical. Like I said, I’m sure these students are passionate about CAD and are on track to master it. But it reminds me of my grade school lessons in cursive writing—a skill that was clearly fading in importance even as we spent hours perfecting it.

CAD isn’t fading, but it is ripe for disruption. CAD software—pretty much across the board—has a stale, unfriendly interface that does little to actually aid designers. CAD has hit a wall, and rather than climb it, developers are shuffling sideways, changing how the software is licensed and packaged rather than how it works.

At some point, AI will change that. I’m not just talking about generative AI that makes 3D models from text prompts, though developers are eagerly seeking that grail. Even a little AI implemented well could transform the very nature of CAD, making the software less of a digital drafting table and more of a virtual design assistant.

When will that happen? What will it look like? Those are questions for Nostradamus (and you! Send me your predictions at malba@wtwhmedia.com). In the meantime, it’s nice to see the next generation of CAD users having fun with it.

An upstanding start to Siemens for Startups

Siemens has launched a new program for engineering and manufacturing startups called, sensibly, Siemens for Startups (I have to imagine “Xcubator” was on the table at some point*). Companies that are accepted to the program will get discounted Siemens software and the opportunity to collaborate with Siemens on development, marketing and more.

No cynical take on this one. My main reaction is surprise that this didn’t already exist—many engineering software providers offer startup programs with similar benefits. (Okay, here’s the cynical take: it’s good business to hook ‘em while they’re young.)

One novel bit about Siemens for Startups is that it’s linked with AWS Startup, Amazon Web Services’ startup program, meaning eligible companies will also get access to AWS cloud infrastructure.

Interested? The application process is open now.

*In other Siemens news, Zel X has been renamed NX X Essentials. Xciting!

Stay gold, Nvidia

It seems I can’t go a week without mentioning Nvidia. The chipmaker’s latest news is that it’s launching a “personal AI supercomputer” called Project DIGITS.

Coming this May, Project DIGITS is a $3,000+ PC (or should that be PAISC?) featuring Nvidia’s GB10 Grace Blackwell Superchip, which combines the Arm-based Grace CPU with a Blackwell GPU. The system will have 128 GB of memory and up to 4 TB of storage on board. It will run Nvidia’s Linux-based DGX OS and come preconfigured with the company’s AI software stack.

Project DIGITS is flashy inside and out. (Image: Nvidia.)

All that means users will be able to run large language models of up to 200 billion parameters, according to Nvidia. In true Nvidia fashion, you’ll also be able to link up two Project DIGITSes to crank that number up to 405 billion (I don’t know where the extra 5 billion parameters come from).

Learn about the latest BIM trends with me

Building. Information. Modeling. These aren’t just ordered excerpts from Merriam-Webster. Together they describe the software tools behind modern design, engineering and construction workflows: BIM.

As with CAD, BIM is also in the midst of major changes. I want to learn more about them, and if you do too, I know just the place.

Sign up for Engineering.com’s upcoming webinar Design: Trends in BIM on Tuesday, January 21 at 12:00 PM EST. I’ll be there interviewing BIM expert Jennifer Schmitz of Short Elliott Hendrickson Inc. (SEH) about all the ways BIM is evolving alongside AI, digital twins, sustainability imperatives, and much more. Plus, you’ll get a chance to ask her any questions I don’t.

See you there!

One last link

Last week I left you with a link to 37 things that confuse me about 3DEXPERIENCE, written by Peter Brinkhuis of CAD Booster.

I guess I’m not the only one who enjoyed that blog post—Manish Kumar, CEO of Solidworks, acknowledged it on a recent LinkedIn post. An excerpt:

“We are humbled every day by the 7.5M+ users around the world who use and love our products and solutions. We are especially grateful to have true friends like Peter Brinkhuis, who challenge us to be even simpler. We take feedback like yours with humility and will continue to simplify our solutions further—always. Your feedback is deeply respected, and we will address it with a sense of urgency.”

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

For engineers, AI anticipation continues in 2025
Tue, 07 Jan 2025 | https://www.engineering.com/for-engineers-ai-anticipation-continues-in-2025/
The hype isn’t going anywhere, but what about the products? It’s still a game of wait-and-see.

Happy 2025! Welcome to Engineering Paper, a weekly column bringing you the latest design and simulation software news.

If you’ve gotten sick of the AI hype of the last couple years, I have bad news for you: it’s not going anywhere. Generative text and images may be old hat by now, but engineering software developers (and venture capitalists) are sprinting to bring AI into the third dimension.

That race got a little more crowded late last year when a startup called Backflip emerged from stealth with $30 million in funding from NEA and Andreessen Horowitz. Calling itself a “3D generative AI company,” Backflip offers a design platform that turns text prompts into 3D-printable models. The company was founded by the same duo that launched 3D printing company Markforged: Greg Mark, serving as Backflip’s CEO, and David Benhaim, CTO.

Backflip turns a text prompt into a 3D printed copper mug. (Image: Backflip.)

Backflip isn’t the first to try this. Last year I reported on Autodesk’s Project Bernini, a text-to-3D generator that’s theoretically impressive, but still far from being a practical design tool.

Is Backflip any better? I wish I could say. While the platform offers a free trial, there’s currently a waitlist due to “overwhelming demand.” I’ll weigh in when I can. (If you’ve tried it, let me know what you think at malba@wtwhmedia.com.)

AI for electrical engineers… maybe

I told you the AI fervor wasn’t going away. Another example comes from Cadstrom, a Canadian startup developing AI tools for PCB design validation that recently announced $6.8 million in seed funding. Cadstrom claims that its generative AI-based software will help electrical engineers avoid costly PCB redesigns and shorten design cycles by as much as 66 percent.

(Image: Cadstrom.)

I’ll believe it when I see it. I can’t help but be reminded of SnapMagic, formerly SnapEDA, which announced in 2023 that it had developed a generative AI for circuit design. Fourteen months later, I’m still waiting to see anything other than provocative screenshots. Let’s hope Cadstrom can deliver quicker.

Speaking of quick, Nvidia won’t slow down

There’s got to be something to all this AI hype, right? Well there’s certainly something in it for Nvidia, which has ridden the AI wave to become the second most valuable company in the world. The chipmaker made a characteristically dense series of announcements today at CES in Las Vegas focusing largely on products and partnerships in industrial AI.

Among those products are three new Omniverse Blueprints, reference workflows for developing AI-connected digital twins in the company’s Omniverse platform (here’s more on one of the first Blueprints Nvidia announced a couple months ago for real-time simulation).

The new Blueprints are Mega, for testing and developing robot fleets; Autonomous Vehicle (AV) Simulation, for AV developers to review and generate data; and Omniverse Spatial Streaming to Apple Vision Pro, which helps developers create apps to visualize digital twins on Apple’s mixed reality headset.

The partnerships include the usual who’s who of engineering software developers: Altair, Ansys, Cadence, Siemens and quite a few more. They’re all using Omniverse or integrating it into their own software in some way. Siemens, for example, just launched the Teamcenter Digital Reality Viewer, an app for photorealistic visualization powered by Nvidia Omniverse libraries.

Screenshot of the Teamcenter Digital Reality Viewer. (Image: Siemens.)

That’s just a taste of everything Nvidia announced at CES—for all the details, read the press release here. (Tip: you might want to keep the Nvidia glossary open in another tab.)

One last link

I’ll leave you with something that made me laugh recently: 37 things that confuse me about 3DEXPERIENCE, written by Peter Brinkhuis of CAD Booster.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

Simulation trends for 2025: Get ready for AI and surrogate models
Tue, 31 Dec 2024 | https://www.engineering.com/simulation-trends-for-2025-get-ready-for-ai-and-surrogate-models/
Comsol’s Bjorn Sjodin explains the value of reduced order modeling, how chatbots can help simulation beginners and what better AI could lead to.

Engineering.com recently spoke with Bjorn Sjodin, senior vice president of product management at Comsol, about his favorite features of Comsol Multiphysics 6.3, the latest version of the simulation platform.

Today we bring you some bonus questions and answers in which Sjodin muses on the biggest trends in simulation for 2025. Happy new year and happy simulating!

Bjorn Sjodin, senior vice president of product management at Comsol. (Image: Bjorn Sjodin via LinkedIn.)

This interview has been edited for clarity and brevity.

Engineering.com: How do you see AI being used in simulation over the next few years?

Bjorn Sjodin: One thing is that simulation engineers, especially beginners at this stage, are using AI chatbots. It’s like having a tutor that you can ask basic simulation questions. ‘If I’m doing this heat transfer simulation, which type of boundary conditions do I have to choose between?’ And the chatbot can give you some answers that will lead you on the way. Sometimes it hallucinates, you know, gives the wrong answer. But if it’s something that is common simulation knowledge by experienced engineers, then the chatbot will probably know some of that. So I think right now it will be very helpful for beginner users.

[One of Sjodin’s favorite features in Comsol Multiphysics 6.3 is a new integration with ChatGPT. -Ed.]

Another area where you can already go beyond beginners is API programming. Many of our users would like to automate simulation tasks in various ways, either by writing simulation apps or by writing Java code to automate common tasks that they do repeatedly. Our users can use their models to build simulation apps that can be used by those who are not necessarily simulation experts, but consumers of simulation technology.

Screenshot of a simulation app built with Comsol Multiphysics. (Image: Comsol.)

And to build those user interfaces, sometimes you would like to write some code. And the chatbots can help you with programming tasks. They can help you debug your code. They can help you make your code more efficient. And that is something that we are seeing already. So using our new chatbot tool, for example, you can ask it, ‘how can I write a for loop that would automate this task?’ and the chatbot will answer back with code that you can paste into your Comsol model and run. Often it works—not always, sometimes it will give you the wrong answer—but very often it will give you very good answers. If you’re trained in how to guide it, then you can get very efficient and productive with those code snippets.

Do you think it’ll reach a point where it can build the app on its own?

Yes, I think so. That’s where everything is headed. The big question is how fast we will get to that point. No one can answer that, but yes, it is heading in that direction right now.

The biggest limitation that I see is that the current generation of chatbots, they don’t have particularly much spatial perception, so they don’t understand what a CAD model is. They don’t necessarily know the difference between a sphere and a cube and so on. But that will probably change. It can do some of that now, but not to any great extent. When that improves, then you will see more of this complete automation of modeling tasks. I think there’s still a lot of very difficult tasks that the simulation engineers will have to do. It might take decades before we get to full automation there.

In addition to AI, what are the other major simulation trends you’re seeing as we head into 2025?

There’s a lot of focus on reduced order modeling and what we call surrogate models, where you basically compactify your heavy simulations to get very lightweight models that can still give you the same results. They are precompiled, we should say. You precompile it for a wide range of parameters so that you don’t have to go to that full simulation that may take hours to run, and instead you get something that only takes seconds to run.

Screenshot of Comsol Multiphysics’ Model Builder. (Image: Comsol.)

The reason for that trend is that people want to build digital twins. They want to build fast simulation apps. They want to use more complicated models in system simulations. So all of that requires that you can make your models faster to run. And, yeah, neural networks come into play there, but also other technologies, more traditional reduced order modeling technologies.

Could you elaborate on some of those traditional technologies?

Yeah, the most classic technique is that you have maybe a structural mechanics simulation that requires you to solve for maybe a 10 million by 10 million matrix system. There are techniques based on eigenvalue, eigenfrequency analysis that will capture all of the most essential aspects of that model and bring it down from a 10 million by 10 million matrix to maybe a 100 by 100 matrix, and give you almost the same answer as the big model. And if you want to do a system simulation, then that’s all you need. You don’t need the full high fidelity model. You only need maybe some simple inputs and some simple outputs.

So there are traditional techniques for bringing down large system matrices to smaller ones by analyzing the matrix structure in various ways, and those are usually called reduced order models, or model order reduction, or model simplification. They are good for some cases. Neural networks are good for other cases. And there are other machine learning technologies that are useful, and hybrids of these as well.
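Sjodin's 10-million-row example can be shrunk to a toy you can actually run. This sketch (a small 1D finite-difference Laplacian stands in for the stiffness matrix; it is an illustration, not Comsol code) projects a 200-degree-of-freedom system onto its 20 lowest eigenmodes and compares the reduced answer to the full solve:

```python
import numpy as np

n, m = 200, 20   # full model size, number of retained modes

# A 1D finite-difference Laplacian stands in for a large structural
# stiffness matrix.
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

rng = np.random.default_rng(0)
f = rng.standard_normal(n)              # load vector

x_full = np.linalg.solve(K, f)          # full-order solve

# Reduced-order model: keep only the m lowest eigenmodes, which capture
# the "most essential aspects" of the model.
evals, evecs = np.linalg.eigh(K)
V = evecs[:, :m]                        # n x m modal basis
# In this basis the reduced stiffness is diagonal, so the solve is trivial.
x_rom = V @ ((V.T @ f) / evals[:m])

rel_err = np.linalg.norm(x_full - x_rom) / np.linalg.norm(x_full)
print(f"relative error keeping {m} of {n} modes: {rel_err:.2%}")
```

The low modes dominate the response of this kind of system, which is why keeping 20 of 200 modes recovers the full solution to within a fraction of a percent; the same logic, scaled up, is what lets a 100-by-100 reduced matrix stand in for a 10-million-by-10-million one.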

How flexible are the neural network-based surrogate models? Do slight adjustments greatly impact their accuracy?

They are surprisingly good. You typically have a parametric model, maybe with five, six parameters, and you give it some range. So imagine that you have five, six parameters that are driving your simulation. These could be CAD parameters, material properties, boundary loads, etc. And they vary within max and min values. You feed that through the neural network, and the neural network starts sampling in this large parametric space and building up this neural network model.

As long as you’re staying within those parametric ranges that you pre-trained the model for, it is very good. It could be arbitrarily good. Actually, it could be just as good as the finite element model. The only question there is how much time you are willing to spend on training the model. Do you have five minutes or do you have 50 hours? The more time you have, the more accurate these models can be. So they are basically as good as you have time to train them.

You can give it to someone, and you don’t know which numbers they are typing in, but you know at least that they’re going to be in these intervals. Then they can get an answer in one second instead of one hour.

That’s a huge difference, especially if you have someone on the factory floor or in a production setting. They are not necessarily willing to wait for one hour for the simulation app to come back with an answer. They want an answer in five minutes, max—but probably seconds, and that’s what these surrogate models can provide you.
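To make the "stay within the trained ranges" point concrete, here's a small sketch in Python. The simulation function, its two parameters, and their ranges are invented for illustration, and a cheap polynomial response surface stands in for the neural network Sjodin describes; the workflow (sample the parameter box, fit, evaluate instantly) is the same:

```python
import numpy as np

# Stand-in for an expensive parametric simulation with two input parameters.
def expensive_sim(x1, x2):
    return np.exp(-x2) * (1.0 + x1) + 0.5 * x1 * x2

# 1) Sample the parametric space inside fixed min/max ranges (the training box).
rng = np.random.default_rng(1)
lo, hi = [-1.0, 0.0], [1.0, 2.0]
X = rng.uniform(lo, hi, size=(500, 2))
y = expensive_sim(X[:, 0], X[:, 1])

# 2) Fit the surrogate: a least-squares polynomial response surface.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x2**2, x2**3,
                            x1 * x2, x1 * x2**2, x1 * x2**3])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda X: features(X) @ coef   # evaluates near-instantly

# 3) Inside the trained ranges the surrogate tracks the "simulation" closely...
X_in = rng.uniform(lo, hi, size=(200, 2))
err_in = np.abs(surrogate(X_in) - expensive_sim(X_in[:, 0], X_in[:, 1])).max()

# ...outside them, accuracy falls off: extrapolation is where surrogates fail.
X_out = rng.uniform([2.0, 3.0], [3.0, 4.0], size=(200, 2))
err_out = np.abs(surrogate(X_out) - expensive_sim(X_out[:, 0], X_out[:, 1])).max()

print(f"max error inside training ranges:  {err_in:.4f}")
print(f"max error outside training ranges: {err_out:.4f}")
```

Swapping in a neural network changes the fitting step, not the contract: the surrogate is trustworthy only inside the box it was trained on, which is exactly why the parameter ranges are specified up front.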

The best updates in Comsol Multiphysics 6.3: “Let the computer do all the work for you”
Fri, 20 Dec 2024 | https://www.engineering.com/the-best-updates-in-comsol-multiphysics-6-3-let-the-computer-do-all-the-work-for-you/
Comsol’s Bjorn Sjodin shares his favorite features and how they’ll benefit users of the simulation platform’s latest version.

This month Comsol dropped the latest update to its simulation platform, Comsol Multiphysics version 6.3. With time-saving enhancements and a few long-awaited new features, the update may just be the best present Comsol users get this holiday season (okay, maybe not, but it definitely beats the colander your aunt is planning to regift you).

To learn more about the latest update and why it’s a big deal for Comsol Multiphysics users, Engineering.com sat down with Bjorn Sjodin, senior vice president of product management at Comsol. Sjodin took us through his favorite features of the new release and shared his insight into the evolving simulation landscape.

Bjorn Sjodin, senior vice president of product management at Comsol. (Image: Bjorn Sjodin via LinkedIn.)

The following interview has been edited for clarity and brevity.

Engineering.com: What are your favorite features of the new release?

Bjorn Sjodin: The new product is probably my favorite feature, the electric discharge module. It’s an add-on product to Comsol Multiphysics for simulating electric discharges.

For example, say you’re in your car and you get statically charged and you touch some of your control panels, and there’s a little spark that destroys electronics. Or it could be that someone is expanding equipment for next generation power grids for renewable energy and electric vehicles. Lots of new power system equipment is needed, and with that comes safety concerns. The electric discharge model can help in evaluating those new equipment designs for safety, so that people that service that equipment don’t get accidental discharges in their bodies.

Comsol Multiphysics’ new Electric Discharge Module analyzing the effect of a lightning impulse voltage on transformer oil. (Image: Comsol.)

How would users simulate that kind of thing prior to this release?

We have some very basic functionality for this in previous versions. You can use electromagnetic simulation to detect where there is risk for electric discharge by very simple means.

But with the electric discharge module, you can do it more efficiently. You can detect where there is risk for electric discharge, but you can now also simulate the actual discharge phenomena themselves for the first time. You couldn’t do that before, because the physics involved is very, very complicated, so it has to be pre-packaged for users to make it easy to use. And that’s what we have done in this new module. And we hide all the complicated physics behind the scenes so that the users get a friendly interface where they can enter their CAD geometries and their material parameters and what have you.

Is the electric discharge module something that users were requesting?

Yes, they were. We have had users using our other tools for various discharge and discharge-like phenomena for 20 years, so this is something that people have requested for a long time.

The issue, though, has been with the speed of the computers. We could have done this many years ago, probably, but computers weren’t strong enough. Now they are. So now it is reasonable to do these types of simulations, because the phenomena that go into these are quite complicated. It’s a combination of electromagnetics, fluid flow and chemical reactions that requires a lot of computational power.

Is that generally how you plan your development roadmap, based on user requests and computing power?

Yes, those are certainly elements. How we decide on products is kind of complicated. It depends on several questions: What are customers asking for? What is technically possible to do? Do we have the people on board who could produce such a product? Those are usually the most critical factors. And of course, is there high enough demand?

Any other favorite updates in Multiphysics 6.3?

We have GPU support for the first time in Comsol, in two different ways in 6.3. One is GPU support for accelerating acoustic simulations. A transient phenomenon in acoustics could be some sound, some noise happening in a room or in a car, for example, and this new technology makes it possible to see how the acoustic pressure waves propagate through the room or the car over time. And it could be up to 25 times faster than previous versions without GPU, which is fantastic. That's between one and two orders of magnitude faster.

Simulating pressure acoustics in an office environment using Comsol Multiphysics v6.3. (Image: Comsol.)

We also have GPU support for another application, and that is to train the neural network models that people have started to use. Essentially, you build a finite element simulation first, and then you train one of these neural network models to stand in for the finite element simulation. So it's not a replacement for simulation altogether. You have to do the simulation first, but then you train your neural network, and you use that as a fast replacement for the finite element simulation.

That gives you instantaneous results if you need that for some reason. Maybe you’re doing a system simulation. Maybe you are creating one of our simulation apps and you want your users to get instantaneous results. So you then pre-compile these simulations by using neural networks. That now has GPU support, so you can train these neural networks much faster than previous versions—20, 25 times faster than before.
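The workflow Sjodin describes (run the expensive simulation first, then train a fast stand-in on its results) is the standard surrogate-modeling pattern. The sketch below illustrates it in miniature with a toy "simulation" function and a small NumPy neural network; it is a hypothetical illustration of the general idea, not Comsol's implementation or API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive finite element solve: maps one design
# parameter to one result. Purely illustrative -- a real training set
# would come from actual simulation runs.
def expensive_simulation(x):
    return np.sin(3 * x) + 0.5 * x

# Step 1: run the "simulation" many times to build training data.
X = rng.uniform(-1, 1, size=(200, 1))
y = expensive_simulation(X)

# Step 2: train a tiny one-hidden-layer network on that data.
W1 = 0.5 * rng.standard_normal((1, 32)); b1 = np.zeros(32)
W2 = 0.5 * rng.standard_normal((32, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    err = (h @ W2 + b2) - y             # prediction error
    dh = (err @ W2.T) * (1 - h ** 2)    # backprop through tanh
    W2 -= lr * (h.T @ err) / len(X); b2 -= lr * err.mean(axis=0)
    W1 -= lr * (X.T @ dh) / len(X); b1 -= lr * dh.mean(axis=0)

# Step 3: the trained surrogate now gives near-instant evaluations.
def surrogate(x):
    return np.tanh(np.asarray(x, dtype=float) @ W1 + b1) @ W2 + b2
```

In practice the training data would come from many finite element solves over the design space, and the surrogate would then be evaluated inside a system model or simulation app where instant answers matter.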

Is that a typical speedup or more of a best case scenario?

It’s a typical speed up. In general, it will depend highly on the CAD geometry. Is it a uniform geometry? Is it very elongated? All of those kinds of different modeling aspects will affect the speed. It could vary from case to case. Could be faster, could be slower, depending on the details of the model.

You do have to invest, though, in a dedicated GPU card. You can't just use any card. If you're interested in GPU acceleration, you should invest in a high-end card. Otherwise you don't get that dramatic performance boost. We support Nvidia cards.

Do you support AMD graphics cards as well?

We don’t support AMD yet, but we will look into all possibilities for the future, of course.

Is the new electric discharge module also GPU-accelerated?

It’s not running on GPU yet, but we certainly hope to have that running on GPUs in the future. Multiphysics simulations have always been difficult to accelerate on GPUs, but new technologies are emerging now that will we think will make this possible more and more moving forward.

If you look at the GPU support that is out there in the simulation world, it's usually for single-physics phenomena, because that's where it was easy. It's true for anything in simulation: you start with one physics, and then when that is mature enough, you go to two, and then three and four and so on. So it's the same thing with GPU, which is a relatively new technology.

Can you tell me about the automated geometry preparation tools in Multiphysics 6.3? Are they based on AI?

That is based on what we call heuristic rules. I wouldn’t call it AI. That is more based on the experience we have built up over the years on what type of simplifications customers would like to have with CAD models to make their meshing easier, to make their simulations faster.

So that was possible before, but with manual means. Now, we have made it an option to let the computer do all the work for you, all the simplifications for you, automatically. And this is the first time we have done that. It’s very exciting and extremely useful for all of our large industrial customers who have complicated CAD geometries.

How much time could users save with these automated tools, as opposed to the manual way?

Best case scenario, you could go from hours to minutes.

The final new feature I want to ask you about is the chatbot. Is that some sort of ChatGPT-like model?

Yes, exactly. It's not even like ChatGPT; it is ChatGPT. We connect to ChatGPT, so we don't ship a chatbot with Comsol Multiphysics.

The idea here is that the user will connect to their own preferred chatbot. In this first version, we allow connections with ChatGPT, but we hope to connect to other chatbots in the future as well. The user will provide their subscription information in Comsol, and that will establish a link to ChatGPT. We then prime the questions that you ask with context, so ChatGPT understands that the question is coming from a Comsol Multiphysics user, from the Comsol Multiphysics user interface, and it knows what to send back. So what we have implemented is establishing the communication channel and the context by priming it with some prompts, basically.
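The "priming" Sjodin describes amounts to prepending application context to each question before it is forwarded to the chatbot. Below is a minimal sketch of that pattern using the message format common to chat APIs; the context string and function name are hypothetical illustrations, not Comsol's actual code.

```python
# Hypothetical sketch of priming a connected chatbot with context.
# The message structure follows common chat-API conventions; the
# context text and helper function are illustrative only.

CONTEXT = (
    "You are assisting a Comsol Multiphysics user. The question below "
    "comes from inside the Comsol Multiphysics user interface, so "
    "answer in terms of Comsol features, modules and settings."
)

def prime_messages(user_question, history=()):
    """Wrap a user question with priming context before forwarding it."""
    messages = [{"role": "system", "content": CONTEXT}]
    messages.extend(history)                  # earlier turns, if any
    messages.append({"role": "user", "content": user_question})
    return messages  # ready to send over the established channel

msgs = prime_messages("How do I refine the mesh near a boundary?")
```

The application never answers questions itself; it only builds the channel and the context, then relays whatever the chatbot sends back.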

Interested in more insight from Bjorn Sjodin on the latest trends in simulation? Stay tuned for a bonus Q&A (sign up for Engineering.com’s Simulation newsletter to make sure you don’t miss it).

Onshape startups to get SimScale free for 3 months https://www.engineering.com/onshape-startups-to-get-simscale-free-for-3-months/ Tue, 17 Dec 2024 17:31:21 +0000 https://www.engineering.com/?p=134978 Eligible Onshape users will get an extended trial of the cloud-based simulation software, plus more simulation news.

The post Onshape startups to get SimScale free for 3 months appeared first on Engineering.com.

]]>
SimScale free to Onshape startups for three months

SimScale announced that it will offer three free months of its cloud-based simulation software through the Onshape Startup Program. Eligible members of the program will have access to SimScale’s Professional tier, which provides structural, fluid, thermal and other simulations.

(Image: SimScale.)

“At SimScale, we’re committed to democratizing access to powerful simulation technology, empowering engineers and designers to make informed, data-driven decisions starting at the earliest stages of product development,” said David Heiny, SimScale CEO, in the company’s press release. “Our relationship with Onshape exemplifies this mission and provides start-ups with access to a complete, cloud-based toolkit that is flexible, accessible, and scalable.”

Separately, SimScale also announced a partnership with Hexagon that will see Hexagon’s Marc nonlinear finite element solver available through the SimScale platform.

ABB and ESS to develop simulation tools for automotive paint shops

Engineering firm ABB announced that it will collaborate with Engineering Software Steyr (ESS) to develop simulation tools for automotive paint shop operations. Typical automotive paint processes require more than 20 highly variable steps, according to ABB, which hopes that making simulation tools accessible to more companies will help reduce the time and cost of the painting process.

(Image: ABB.)

“Delivering faster and more energy efficient solutions for the paint process is the final piece of the puzzle in digitalizing the manufacturing transition in the automotive industry,” said Marc Segura, president of ABB’s robotics division, in a press release. “The innovative solutions we are developing with ESS will cut vehicle development time by up to a month and generate cost savings of up to 30 percent, making manufacturers more competitive, efficient and resilient.”

ABB has also taken on a minority investment in ESS, but did not disclose financial details.

UniPlot now available as a Matlab add-on

Matlab and Simulink users can now access a connector for UniPlot, a data analysis tool that provides advanced visualization, data filtering, automated report creation and more. The connector, called UniPlot As PostProcessor in Matlab’s add-on explorer, opens after every Simulink simulation to give users quick access to their data in UniPlot. A UniPlot license, available in perpetual and subscription versions, is required.

(Image: UniPlot.)

Altair and Auburn University collaborate to advance vortex rocket engines

Altair says it will work with Auburn University's Samuel Ginn College of Engineering on a $1.25 million AFWERX Phase II STTR contract to advance vortex rocket engines. The simulation developer, which is in the process of being acquired by Siemens, took over the collaboration from Research in Flight, a company Altair itself acquired in April 2024. Research in Flight's technology is now part of Altair's HyperWorks platform and is called Altair FlightStream.

(Image: Altair.)

“FlightStream empowers users in unique ways, bridging the gap between high-fidelity CFD simulations and engineering demands to set industry standards for efficiency, accuracy, and speed,” said Pietro Cervellera, senior vice president of aerospace and defense at Altair, in the company’s press release.

Comsol launches Multiphysics version 6.3 https://www.engineering.com/comsol-launches-multiphysics-version-6-3/ Tue, 03 Dec 2024 15:12:56 +0000 https://www.engineering.com/?p=134538 Plus the latest simulation news from Ansys and Eon Reality in this Engineering.com roundup.

The post Comsol launches Multiphysics version 6.3 appeared first on Engineering.com.

]]>
Comsol releases Multiphysics v6.3

Comsol announced the latest release of its simulation software, Comsol Multiphysics v6.3. The updated software includes new capabilities and performance enhancements for electromagnetics, structural mechanics, acoustics, fluid and heat, and chemical and electromechanical simulations, as well as new geometry tools and general updates.

One notable addition to Comsol Multiphysics v6.3 is the new Electric Discharge Module, which can analyze electric discharges and breakdowns in solids, liquids and gases.

Comsol Multiphysics’ new Electric Discharge Module analyzing the effect of a lightning impulse voltage on transformer oil. (Image: Comsol.)

Another release highlight is that Comsol Multiphysics’ Acoustics Module now supports GPU acceleration, which Comsol says will enable time-domain simulations of pressure acoustics to run up to 25 times faster. “The new GPU support for transient acoustic simulations is invaluable for engineers working on automotive sound systems or optimizing acoustics in office and residential spaces,” said Mads J. Herring Jensen, development manager at Comsol, in the company’s press release.

Simulating pressure acoustics in an office environment using Comsol Multiphysics v6.3. (Image: Comsol.)

Comsol Multiphysics v6.3 also adds new automated geometry preparation tools that will detect and remove small details and gaps in CAD models, which Comsol says will enable robust mesh generation to streamline model development.
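As a rough illustration of what automated geometry preparation does, the toy pass below drops "features" smaller than a tolerance before meshing. The feature records and threshold rule are hypothetical stand-ins for brevity; Comsol's actual tools operate on real CAD topology.

```python
# Toy defeaturing pass: drop geometry "features" whose characteristic
# size falls below a tolerance, so the mesher never sees them. The
# feature records here are hypothetical stand-ins for CAD topology.

def defeature(features, tolerance):
    """Split features into those kept for meshing and those removed."""
    kept, removed = [], []
    for name, size in features:
        (kept if size >= tolerance else removed).append(name)
    return kept, removed

features = [
    ("main_bore", 0.05),     # characteristic sizes in meters
    ("mount_hole", 0.008),
    ("tiny_fillet", 0.0004),
    ("sliver_face", 0.0001),
]
kept, removed = defeature(features, tolerance=0.001)
# kept: large features survive; removed: small details pruned
```

Removing sub-tolerance details like slivers and tiny fillets is what keeps mesh generation robust on complicated industrial models.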

Ansys integrates Nvidia Modulus AI into semiconductor simulation suite

Ansys announced that its semiconductor simulation products will integrate Nvidia Modulus, a physics-based framework for training and deploying AI models. According to Ansys, the integration will allow engineers to develop customized generative AI surrogate models to help them explore a larger design space for products including GPUs, HPC chips and other integrated circuits.

Ansys says it plans to integrate Modulus-created AI accelerators in RedHawk-SC, Totem-SC, PathFinder-SC and RedHawk-SC Electrothermal. The company claims in its press release that it has demonstrated over a 100x speedup for thermal simulations using the AI-enhanced process.

Ansys on-chip thermal predictability with Nvidia Modulus. (Image: Ansys.)

“Nvidia Modulus makes it easy to train and deploy AI models that are physics-informed and reflect real-world causality,” said Tim Costa, senior director of CAE, EDA & quantum and HPC at Nvidia, in the Ansys press release. “The integration with Ansys simulation products for multiphysics semiconductor design are ideal applications for Modulus to enhance simulation speed and efficiently identify the best design solutions.”

In other Ansys news, the company has been awarded a contract with the Microelectronics Commons to provide its simulation software to the Department of Defense–backed network.

Eon Reality launches Procedural Skill Simulator

Eon Reality launched the Procedural Skill Simulator as part of its Knowledge Simulator platform, an AI-assisted augmented and virtual reality (AR/VR) training solution. The new simulator is designed to train professionals in manufacturing, aviation, healthcare and beyond, according to Eon Reality. The platform employs a three-phase learning approach that includes demonstration, practice and assessment.

(Image: Eon Reality.)

“By combining AI-driven guidance with immersive visualization, we’ve created a learning environment that dramatically accelerates skill acquisition while ensuring absolute precision and safety compliance,” said Dan Lejerskar, chairman of Eon Reality, in the company’s press release.

The fundamentals of mixed reality (MR) for engineers https://www.engineering.com/the-fundamentals-of-mixed-reality-mr-for-engineers/ Tue, 26 Nov 2024 16:34:41 +0000 https://www.engineering.com/?p=134337 Combining both augmented and virtual reality, MR provides a convenient spatial computing experience for engineers.

The post The fundamentals of mixed reality (MR) for engineers appeared first on Engineering.com.

]]>
Sometimes straight reality is hard to take. Mixing can help.

Mixed reality (MR) is a type of spatial computing that incorporates both augmented reality (AR) and virtual reality (VR). MR is useful as a descriptor for certain head-mounted displays (HMDs) that can smoothly transition between AR and VR experiences.

Like AR and VR, MR offers exciting new ways for engineers to work and collaborate. Here’s what you need to know.

What is mixed reality (MR)?

If you already understand the concepts of augmented and virtual reality, mixed reality is a natural complement. (And if you don’t understand AR and VR, make sure to read The fundamentals of augmented reality for engineering and The fundamentals of virtual reality for engineering for a primer.)

While AR presents an overlay of virtual elements onto the real world, and VR presents an entirely virtual world, MR allows users to choose the level of reality that works for them. MR quite literally offers a slider between the real world and a virtual world.

MR is enabled by head-mounted displays with outward-facing cameras that record the real world. Unlike a standard VR headset, which is like a heavy blindfold, a user wearing an MR headset can still see what's in front of them and interact in real time with the real world. The HMD is effectively see-through.

The Apple Vision Pro headset isn’t actually see-through, but it does its best to fool the user and the outside world. (Image: Apple.)

Of course, an expensive and sensor-laden headset isn’t much use if all you want to do is see what’s right in front of you. The real magic of MR is that it can use all or none of the real world in its output, depending on the user’s preference and application.

For example, an MR user could pull up a virtual computer monitor in their real office, typing on it with their physical keyboard that they can still see. At lunchtime they could turn off the real world entirely and enter a virtual movie theater to catch the latest episode of End of the Line. And they could switch back to full reality when their spouse knocks on the door to share the cool news they just learned on This Week in Engineering. (Some MR headsets even display fake eyes on the outside for when you’re talking to real people, though whether this is helpful or creepy is a matter of debate.)

How are engineers using MR?

Engineers can use MR in the same ways that they use AR and VR.

For instance, design engineers can pull up virtual 3D models to examine their designs at life size and in a real (or virtual) environment. Engineers planning a new factory can use MR to visualize different configurations of equipment and pick the best option for efficiency, worker ergonomics or other factors. Engineers and architects can use MR to walk through virtual buildings to better understand and optimize the layout before construction.

MR doesn’t offer any fundamentally new capabilities over AR and VR, but the ability to switch between the two modes of spatial computing adds convenience and flexibility. Why choose one or the other when you could have both at the nudge of a slider?

The fundamentals of virtual reality (VR) for engineering https://www.engineering.com/the-fundamentals-of-virtual-reality-vr-for-engineering/ Thu, 21 Nov 2024 13:30:00 +0000 https://www.engineering.com/?p=133927 This transformative technology opens new worlds for product design and beyond. Here’s how it works and how to get started.

The post The fundamentals of virtual reality (VR) for engineering appeared first on Engineering.com.

]]>
Virtual reality (VR) is opening new worlds for engineers.

The impressive technology has come far in the past few decades of research and development. Today, users can strap on a head-mounted display (HMD) and enter a virtual world that looks strikingly realistic—or fantastically alien. The possibilities of VR are wide open, but it’s already proving to be a practical and versatile tool for engineering and manufacturing professionals.

What is virtual reality (VR)?

Virtual reality represents one extreme of the virtuality continuum, the theoretical framework underpinning spatial computing. It’s balanced on the other side by reality (real reality, if you like), and the two poles are bridged by mixed reality technologies such as augmented reality.

VR immerses the user in a simulated, computer-generated world. The computer is (or is connected to) a head-mounted display (HMD), a special headset with two screens, one for each of the wearer's eyes. Using the same principle as 3D movie glasses, each screen shows the user a slightly different image to create the illusion of depth in the simulated world.
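The per-eye images come from rendering the same scene at two camera positions separated by the interpupillary distance (IPD). A simplified sketch of that offset follows, with illustrative values; real VR runtimes supply calibrated per-eye poses.

```python
import numpy as np

# Average adult interpupillary distance is roughly 63 mm; real
# headsets let the wearer adjust this. Values here are illustrative.
IPD = 0.063  # meters

def eye_positions(head_pos, right_dir):
    """Offset the head position by half the IPD along the head's
    'right' vector to get the left- and right-eye camera origins."""
    head_pos = np.asarray(head_pos, dtype=float)
    right_dir = np.asarray(right_dir, dtype=float)
    right_dir /= np.linalg.norm(right_dir)   # normalize, just in case
    half = (IPD / 2.0) * right_dir
    return head_pos - half, head_pos + half  # (left eye, right eye)

# Head at standing eye height, with +x as the wearer's "right."
left, right = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
```

Each eye's image is rendered from its own origin; the roughly 6 cm disparity between the two views is what the brain reads as depth.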

This illusion can be incredibly effective. A high-end HMD and well-made virtual environment can convince the user that he or she is really in another place (the genre of horror VR games demonstrates this well—you can easily find footage of players ripping off their headsets when the game starts to feel a little too real).

How do users interact with a virtual world? Most VR applications are controlled with one or two hand controllers held by the user. The motion of these controllers is tracked and often represented in VR with a laser-like beam that can point at elements in the VR world. To move through the world, there are two common approaches: users either push a button on the controller to “walk” forward or simply point and click to teleport to a new spot.

Some VR headsets track the user’s eye and hand movements and use glances and gestures to interact with the application. For an even more immersive experience, omnidirectional treadmills allow users to physically walk through the virtual environment.

None of these control methods have emerged as a clear standard, so the user interface for virtual reality will likely continue to evolve along with VR hardware and software.

How are engineers using virtual reality?

As with augmented reality, engineers can use virtual reality to get up close and personal with their designs. Rather than spinning a 3D model in a CAD program, engineers can put on a VR headset and see the same model in a virtual world. They could walk around it to see it from different angles and at true-to-life size. They could even hide parts, adjust materials, explode the assembly or examine section views—not on a flat computer screen, but in an immersive virtual world.

VR headsets with eye tracking allow engineers to evaluate how users would interact with a design. For example, engineers designing the layout for an airplane cockpit could learn where the pilot’s eyes are most drawn and optimize the design accordingly. Similarly, VR can help designers optimize the ergonomics of a human-operated machine by revealing how test users interact with the virtual model.

VR can also serve as a virtual meeting space for collaborative design. Engineers from anywhere in the world could put on a headset and join their colleagues in reviewing a model, no long-haul flights required. Think of it like a three-dimensional Zoom meeting with the 3D equivalent of screen sharing.

The same idea is used in the architecture, engineering and construction (AEC) industry to walk through virtual buildings. Architects, engineers, clients and other stakeholders can use VR to get a spatially accurate feel for a design before it's built. This is a leap far beyond reviewing architectural drawings and renderings. As with engineering design reviews, the immersiveness of VR is what adds immeasurable value.

How engineers can get started with virtual reality

You can’t visit virtual reality without a VR headset, but there are plenty of options to choose from. These range from consumer-targeted products for a few hundred dollars to enterprise VR headsets that provide better resolution and responsiveness but cost thousands of dollars. Some VR headsets are self-contained computers with internal processors, while others depend on a connection to a GPU-equipped engineering workstation.

There’s plenty of software for engineering users of VR. Some CAD programs support VR directly, allowing designers to easily switch to a virtual view of their models. Other software caters to VR design reviews with features for collaboration and markup. Game engine software, sometimes called real-time 3D software, can be used to develop custom VR experiences using existing CAD models.

Though there’s an upfront cost to getting started with VR—both in the price of VR headsets and software as well as the learning curve for users—for many engineers, the cost is well worth it. VR provides an unparalleled way to visualize and refine a design, and has thus become an integral part of many engineering workflows.
