
3D CAD World



Inventor

Autodesk Ships Nastran 2015, Nastran In-CAD 2015

August 12, 2014 By Barb Schmitz

There’s been a big push by simulation software vendors to get engineers and designers to start incorporating analysis tools into their product development processes. High-end simulation tools have traditionally been used by specialists or analysts whose jobs are to put design geometry, created by engineers, through its paces, using analysis tools to validate that designs will be structurally sound and will operate as intended once built.

The motive is obvious. There are many more design engineers than there are analysts, so making their products more accessible to design engineers opens up much bigger potential markets for simulation vendors. There are also, however, many compelling reasons for engineers to use analysis tools early in the design process. Doing so speeds up development, cuts time to market, and helps them identify potential design flaws long before costly physical prototypes are built.

Autodesk Nastran is an industry-recognized FEA solver for analyzing linear and nonlinear stress, dynamics, and heat transfer characteristics of structures and mechanical components.

New versions of Nastran solver released

One of these high-end tools is Nastran, finite-element analysis (FEA) software now sold by Autodesk after its acquisition of NEi Software back in May. The goal of the acquisition was to expand the company’s structural analysis capabilities, and it follows similar strategic technology acquisitions in the computational fluid dynamics (CFD), plastics and composites solutions spaces.

Autodesk Nastran offers an industry-recognized FEA solver for analyzing the linear and nonlinear stress, dynamics, and heat transfer characteristics of structures and mechanical components. Nastran displays results in real time and lets users change solution parameters while solving, which helps engineers and analysts obtain accurate results for complex simulations.

Autodesk Nastran In-CAD 2015 is a CAD-embedded, general-purpose FEA tool powered by the Autodesk Nastran solver. The new Nastran In-CAD offers a wide range of simulation capabilities spanning multiple analysis types, delivering high-end simulation in a CAD-embedded workflow. The software works within both Autodesk Inventor and SolidWorks 3D CAD software.

Taking FEA to the Cloud

Autodesk Nastran Solver is available to customers using the Autodesk Simulation Mechanical and Autodesk Simulation Flex product offerings. Autodesk Simulation Flex, formerly Autodesk Sim 360 Pro with Local Solve, consists of:

* Autodesk Simulation Mechanical with cloud-enabled FEA tools for static stress, linear dynamic analysis and mechanical event simulations;
* Autodesk Simulation CFD Motion including Design Study environment and 3D CAD connectors with cloud-enabled CFD tools for fluid flow and thermal simulations; and
* Autodesk Robot Structural Analysis with cloud-enabled simulation for detailed analysis and code checking on a range of structures, including buildings and frame structures.

“We’ve been working with Autodesk tools since the acquisition of Algor and CFDesign and have seen first-hand how incredibly powerful the combination of strong numerical solvers and Autodesk’s advanced visualization, cloud and user interface tools can be,” said Dmitriy Tseliakhovich, Co-founder, CEO and CTO at Escape Dynamics. “Nastran is a great solver with very powerful non-linear and dynamic simulation capabilities so its integration with Autodesk’s front end and elastic cloud computing platform is extremely exciting.”

Autodesk Nastran and Autodesk Nastran In-CAD are now available. Details about both products, including licensing and pricing options, are available from Autodesk.

Barb Schmitz

Filed Under: Autodesk, CAE, CFD, News Tagged With: Autodesk, CFD, FEA, Inventor, SolidWorks

Autodesk Ships Integrated CAM Package For Inventor

April 8, 2014 By Barb Schmitz

There has been a real need in the manufacturing industry for a tightly integrated CAM package that works hand in hand with users’ CAD systems. Users wanted CAM functionality but didn’t want to learn a whole new tool. Today, Autodesk announces that Inventor users now have such a package.

Autodesk got to this point after a key acquisition, in October 2012, of HSMWorks technology, which was originally developed for SolidWorks users. Though there was skepticism in the market, and fear among HSMWorks users, about the company’s real intentions for the technology, Autodesk committed to continuing development and announced plans to integrate it into its entire portfolio of desktop and cloud-based products.

An integrated CAM product for multiple users

Inventor HSM 2015 will help machinists, designers, and engineers turn their Inventor models into manufacturable parts by generating machining toolpaths directly within Inventor.

Inventor HSM 2015 includes a full license of Inventor 2015 software, giving users a complete CAD/CAM package with integrated design-to-manufacturing capabilities.

Inventor HSM includes the following features:

* Flexible 2.5D, 3D, and 3+2 toolpath options and settings for the best possible surface finish
* Simulation tools that help users verify the machining process before CNC programs are run on a machine
* Highly customizable post-processors and a powerful CNC editor that enable users to tailor their programs to their CNC machine

Filed Under: Autodesk, Autodesk News, News Tagged With: cam, Inventor

The failed promise of parametric CAD, final chapter: A viable solution

November 18, 2013 By Evan Yares

What is the failed promise of parametric CAD? In short, model reuse.

It’s a lot more difficult than it ought to be, for a variety of reasons. Several months back, I wrote a series of articles discussing those reasons, as well as some of the solutions that have come up over the years. What was missing from the series was a final chapter: a detailed description of what could prove to be a viable solution to the problems with model reuse, the resilient modeling strategy.

The resilient modeling strategy (RMS) is the brainchild of Richard “Dick” Gebhard. I wrote about Dick last June, in the article A Resilient Modeling Strategy. He’s a low-key guy with deep experience and serious expertise in the practical use of MCAD software. Over his career in CAD, he’s been a reseller for CADKEY, Pro/E, and most recently, Solid Edge.

RMS is a best practice for creating CAD models that are stable and easily reusable (even by inexperienced users). It can be learned and easily used by typical CAD users, it preserves design intent in models, and it provides a mechanism by which managers or checkers can quickly validate a model’s quality.

Resilient Modeling Strategy

When Dick first started thinking about the concepts that make up the resilient modeling strategy, it was natural that he did so in the context of showing the advantages of Synchronous Technology (the Siemens PLM brand name for its version of direct modeling). In our discussions about RMS over the last year or so, I pointed out that, while RMS did indeed demonstrate the benefits of hybrid history/direct modeling in Solid Edge, for it to be taken seriously, and not be unfairly dismissed as a marketing initiative for Solid Edge, it needed to work with a wide variety of MCAD tools. I think Dick got where I was coming from, because he’s continued to refine and generalize RMS, with feedback from users of a number of MCAD systems.

In its current incarnation, RMS works particularly well with Solid Edge, as might be expected, but also works very well with Creo, NX, CATIA, and IronCAD (all of which are hybrid history/direct systems). Further, with a few modifications, it can provide compelling value with SolidWorks, Inventor, and Pro/E (all of which are primarily history-oriented systems).

It’s significant that RMS is also free to use. While Dick is available to provide presentations, seminars, and training, he has not attempted to patent, or keep as trade secrets, the underlying concepts of RMS. (He does claim a trademark on the term “Resilient Modeling Strategy,” which means that organizations offering commercial training on RMS will need to get Dick’s OK to use the term.)

Dick has posted an introductory presentation on RMS at resilientmodeling.com. While the entire presentation is 20 minutes long, the first 3-1/2 minutes cover the problems that people invariably experience when reusing or editing history-based CAD models. Watching that much will likely convince you to watch the rest.

On Wednesday, November 20, at 10:00 AM PST, Dick will be hosting a webinar on RMS. It’s scheduled to last just 30 minutes, with the emphasis on content, not hype. If you’re a serious CAD user or a CAD manager (or, for that matter, you work for an MCAD developer), it’ll be well worth your time to attend.

TL;DR: Resilient Modeling Strategy is a best practice for creating high quality reusable 3D MCAD models. It works with many CAD systems, it’s easy to learn and use, and it’s free. Big payoff for MCAD users. 

Presentation at resilientmodeling.com

Register for Nov 20 webinar on Resilient Modeling


Filed Under: Catia, Creo, Evan Yares, Featured, Inventor, News, Pro/Engineer, Siemens PLM, SolidWorks Tagged With: 3D CAD, Catia, Dassault Systemes, Evan Yares, Inventor, IronCAD, PTC, Siemens PLM, Solid Edge, SolidWorks

The failed promise of parametric CAD part 5: A resilient modeling strategy

June 25, 2013 By Evan Yares

The model brittleness problem inherent in parametric feature-based modeling is a really big deal. And it’s something, honestly, that I don’t have a great answer for. I’ve even asked a few power users who I know, and their answers seemed to involve a bit of hand-waving, and a reference to having lots of experience.

While best practices are a potentially good step forward, they need to be straightforward enough that mere mortals (as opposed to power users) can follow them.

Around Christmas last year, I got a call from Richard Gebhard, an engineer’s engineer, who has made his living selling CAD, and training people to use it (including more than his fair share of power users), for longer than he would like me to admit. (I’m pretty sure I’ve been in the CAD industry longer than him, though.) Richard told me he had something he wanted to show me, and if I’d take the time to meet him, he’d buy me lunch.

What Richard showed me was a way of creating and structuring CAD models that made a lot of sense. It not only reduced parent-child dependencies, but made them more predictable. And, more importantly, it made it a lot easier for mere mortals to scan through the feature tree and see if there were any grues (it’s a technical term; feel free to look it up).

Over the next several months, we had lunch several times. I made suggestions. He rejected some, accepted some, and thought about others. At the same time, he was bouncing his ideas off several of his best power users (including his son). By a couple of months ago, he had refined his system to the place where it would work impressively well with nearly any parametric feature-based CAD system. So, he went to work finalizing his presentation.

I had mentioned that Delphi, by patenting some of the elements of horizontal modeling, limited the number of people who could benefit from it. (Worse for them, they patented it, then filed bankruptcy. That didn’t help much.) Richard’s goal wasn’t to monetize his process. His goal was to evangelize it. To help CAD users—both power users and mere mortals—to get their jobs done better.

Richard and I had talked, over time, about what he should call this process. At first, I liked the word “robust.” In computer science, it is the ability of a system to cope with errors during execution. In economics, it is the ability of a model to remain valid under different assumptions, parameters and initial conditions. Those are good connotations. But, then I thought of one of my favorite examples of robustness. The first time I visited Russia, I noticed that the apartment buildings were built of thick poured concrete. Very robust. And nearly impossible to remodel.

Richard’s system wasn’t robust. It was resilient. So, he has named it the Resilient Modeling Strategy. RMS.

So far, I’ve written over 2,600 words, to provide some background on the problems of parametric modeling, and some of the solutions that have been offered over the years. But, after all that, I’m not going to tell you anything more about RMS. At least, not yet.

Tomorrow, Wednesday, June 26, Richard will present RMS for the first time ever, at Solid Edge University, in Cincinnati, Ohio. His presentation will start at 9:00 AM local time, and will be in room 6 of the convention center. If you’re there, put it on your calendar. If not, you’ll need to wait until Richard gets back to Phoenix, and I publish a follow-up post.

RMS is not anything difficult, or fundamentally new. It’s just an elegant distillation of best practices, designed to work with nearly any parametric CAD system, and simple enough that it doesn’t get in the way.  It’ll help you make better CAD models faster.

Filed Under: Alibre, Autodesk, Creo, Design World, Evan Yares, Featured, Inventor, Pro/Engineer, Siemens PLM, SolidWorks Tagged With: Creo, Inventor, IronCAD, Solid Edge, SolidWorks

The failed promise of parametric CAD part 4: Going horizontal

June 25, 2013 By Evan Yares

In the early 90s, Ron Andrews, a senior product designer at Delphi’s Saginaw Steering Systems Division, became fed up with the difficulties of editing parametric CAD models. So, he and a team of his colleagues, including Pravin Khurana, Kevin Marseilles, and Diane Landers, took on the challenge of finding a solution.

They came up with an interesting concept that they called horizontal modeling. Here’s a description of it from their patent abstract:

“Disclosed is a horizontal structure method of CAD/CAM manufacturing where a base feature is provided and one or more form features added to it to form a model. The form features are added in an associative relationship with the base feature, preferable a parent child relationship, but are added in a way as to have substantially no associative relationships with each other. The result is a horizontally-structured Master Process Model where any one form feature can be altered or deleted without affecting the rest of the model. Extracts are then made of the Master Process Model to show the construction of the model feature by feature over time. These extracts are then used to generate manufacturing instructions that are used to machine a real-world part from a blank shaped like the base feature.”

Here’s a picture that makes it clearer:

Horizontal Modeling

The simplest explanation I can give for it is this: You create a base feature, and a bunch of datum (working) planes. You attach all the child features to those datum planes. Voilà: no parent-child problems.
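To make the difference concrete, here’s a minimal sketch (in Python, with hypothetical feature names, not any particular CAD system’s API) of how an edit propagates through a conventional history chain versus a horizontal structure:

from collections import defaultdict

def affected_by(edited, parents_of):
    """Return every feature that must regenerate when `edited` changes."""
    children_of = defaultdict(set)
    for child, parents in parents_of.items():
        for p in parents:
            children_of[p].add(child)
    hit, stack = set(), [edited]
    while stack:
        for child in children_of[stack.pop()]:
            if child not in hit:
                hit.add(child)
                stack.append(child)
    return hit

# Conventional history chain: each feature references the one before it.
chain = {"pocket": {"base"}, "boss": {"pocket"}, "hole": {"boss"}}
print(affected_by("pocket", chain))        # {'boss', 'hole'} -- the edit ripples down

# Horizontal structure: form features attach only to datum planes.
horizontal = {"datum1": {"base"}, "datum2": {"base"}, "datum3": {"base"},
              "pocket": {"datum1"}, "boss": {"datum2"}, "hole": {"datum3"}}
print(affected_by("pocket", horizontal))   # set() -- siblings are untouched

Edit a mid-tree feature in the chain and everything downstream regenerates; in the horizontal version, the same edit touches nothing else.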

I admit that I’m not going to do justice to horizontal modeling in this conversation. There’s actually quite a bit to it, and it makes a lot of sense when coupled with computer-aided process planning (CAPP).

Horizontal modeling has a handful of problems. First, it does a pretty good job of killing the possibility of having design intent expressed in the feature tree. Next, it works better with some CAD systems than others. (When horizontal modeling was in the news, SolidWorks had a problem managing the normals on datum planes, so it didn’t work too well.) The deadliest problem is that Delphi got a bunch of patents on the process, then licensed it to some training companies. From what I can see (and I may be wrong), none of these training centers offer horizontal modeling classes any more.

While, technically, you can’t use horizontal modeling without a patent license from Delphi, the concepts at its core are fairly similar to things that CAD users have been doing for years. A few years ago, Josh Mings posted on a couple of online forums that “Horizontal Modeling is just one word for it, you may also know it as Skeleton Modeling, Tier modeling, Sketch Assembly modeling, CAD Neutral Modeling, or Body Modeling.” (It’s actually two words for it, but I get his point.)

Horizontal modeling is not a silver bullet solution for the problems inherent in parametric feature-based CAD. It’s just a best practice—a strategy for getting around the problems. It seems to be headed in the right direction, but it suffers from the complexity that comes from trying to fix too many problems at once.

Next: A Resilient Modeling Strategy

Filed Under: Alibre, Autodesk, Creo, Design World, Evan Yares, Featured, Inventor, Pro/Engineer, Siemens PLM, SolidWorks Tagged With: Creo, Inventor, IronCAD, Solid Edge, SolidWorks

The failed promise of parametric CAD part 3: The direct solution

June 25, 2013 By Evan Yares

Direct modeling, a syncretic melding of concepts pioneered by CoCreate, Trispectives, Kubotek, and many others, has shown the most promise to cure the parametric curse.

Direct modeling is today’s hot CAD technology. PTC, Autodesk, Siemens PLM, Dassault (CATIA, but not so much SolidWorks), IronCAD, Kubotek, Bricsys, SpaceClaim (and certainly some other companies I’ve forgotten) all have their own unique implementations of it.

The common thread in direct modeling is to use standard construction techniques when modeling, and feature inferencing (or recognition) when editing. It’s easier said than done. It’s taken about 35 years of industry research to get to the place we are today, where you can click on a face of a model, and the system will recognize that you’re pointing to a feature that has some semantic value. And that’s not even considering the tremendous amount of work that has been required by legions of PhD mathematicians to develop the math that lets you push or pull on a model face, and have the system actually edit the geometry in a useful manner.

For the CAD software, figuring out which way to edit a selection is almost a mind-reading trick: A user clicks and drags on a part of a model. What would they like to happen? In some cases it’s easy: Drag one face of a rectangular block, and the system will just make it longer or shorter. But if the block is full of holes, bosses, and blends, it becomes a lot more complicated. What should the system do if you drag a face so far back that it consumes another feature, and then pull it back to where it was? Should the consumed feature be lost forever, or should the system remember it in some way, so it can be restored?

There are no right answers. It seems that no two direct modeling systems handle the decision of what is a “sensible” edit in the same way.
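Here’s a toy sketch of that consumed-feature question (in Python; my own illustration, not any real system’s behavior): a one-dimensional “block” with a hole, and two edit policies that answer it differently.

class Block:
    def __init__(self, length, hole_at, remember=False):
        self.length, self.holes, self.remember = length, [hole_at], remember
        self._consumed = []        # holes swallowed by a past edit (policy A only)

    def drag_face(self, new_length):
        self.length = new_length
        consumed = [h for h in self.holes if h >= new_length]
        self.holes = [h for h in self.holes if h < new_length]
        if self.remember:
            self._consumed += consumed
            # policy A: restore any remembered hole the face has uncovered
            self.holes += [h for h in self._consumed if h < new_length]
            self._consumed = [h for h in self._consumed if h >= new_length]

forgetful = Block(100, hole_at=80)
forgetful.drag_face(50); forgetful.drag_face(100)
print(forgetful.holes)                          # [] -- hole is gone forever

elephant = Block(100, hole_at=80, remember=True)
elephant.drag_face(50); elephant.drag_face(100)
print(elephant.holes)                           # [80] -- hole comes back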

While direct modeling absolutely solves the model brittleness problem inherent with parametrics, it does it by simply not using parametrics. Even with hybrid parametric/direct CAD systems, the answer to the parametric curse is still to not use parametrics when you don’t need to.

The solution of “use direct modeling when you can, and learn to live with parametric hassles when you can’t” just isn’t very satisfying to me.

Next: Going horizontal

Filed Under: Alibre, Autodesk, Creo, Design World, Evan Yares, Featured, Inventor, Pro/Engineer, Siemens PLM, SolidWorks Tagged With: Creo, Inventor, IronCAD, Solid Edge, SolidWorks

The failed promise of parametric CAD part 2: The problem is editing

June 25, 2013 By Evan Yares

In the previous post, I wrote about the failed promise of parametric CAD: problems such as parent-child dependencies and unwanted feature interactions, coupled with no easy way to either prevent or check for them.

The difference between modeling and editing in a parametric CAD system is simply the difference between creating things from scratch, and modifying things you’ve already created. The distinction may seem academic, but it is only when editing that parent-child dependencies are a potential problem.

Consider a scenario of creating a parametric part—one that you’ve worked out in your head pretty well ahead of time—where you start from scratch, model sequentially, and spend all your time working on the most recent feature, never needing to go back to edit upstream features.

In that context, the model’s parent-child dependencies would exist, but would be benign. They’d never get in your way. That is, until you went back to edit the part.

In most cases, people don’t build models from scratch without going back, from time to time, to adjust earlier features. In that process, they’ll catch, and be able to deal with, some of the dependencies. But not likely all, or even most, of them.

I’ve heard experienced CAD people use an interesting term for models with hidden and untested parent-child dependencies: Parts from hell. When you’re trying to modify them, you never know when a small change might cause them to completely fall apart. I think a better, more descriptive, term is brittle: Hard, but liable to break or shatter easily.

This also suggests a descriptive term for CAD models which are not liable to break or shatter easily: resilient.

I’ve only ever seen one group of users who could consistently create complex yet resilient parametric part models from scratch: PTC application engineers from the early to mid-1990s. Of course, they could only do it during customer benchmarks, with parts they’d practiced ahead of time, where they had worked out and memorized all the steps, and where they had a good idea of the parameter ranges. Even then, if you were to ask them to change a dimension that would cause a topological change, the models might unceremoniously blow up.

Not to paint too bleak a picture, there are certainly CAD power users who have the skills to create resilient CAD models. I’ve met more than a few of them: true professionals, who by combining experience, insight, and education, have earned the respect of their peers. They understand how to structure CAD models to avoid any problems with brittleness.

Nah. I’m just messing with you. Power users struggle with this just like us mere mortals. It’s just that their models don’t usually fall apart until you go outside the scope of parametric changes they had anticipated. Give a power user’s carefully crafted CAD model to a user who has a black thumb (I’m sure someone comes to mind), and they’ll find ways to blow it up that the power user never imagined.

Next: The direct solution

Filed Under: Autodesk, Creo, Design World, Evan Yares, Featured, Inventor, Pro/Engineer, Siemens PLM, SolidWorks Tagged With: Creo, Inventor, IronCAD, Solid Edge, SolidWorks

The failed promise of parametric CAD part 1: From the beginning

June 25, 2013 By Evan Yares

The modern era of 3D CAD was born in September 1987, when Deere & Company bought the first two seats of Pro/Engineer from the still-new Parametric Technology Corporation. A couple of years later, Deere’s Jack Wiley was quoted in the Anderson Report, saying:

“Pro/ENGINEER is the best example I have seen to date of how solid modelers ought to work. The strength of the product is its mechanical features coupled with dimensional adjustability. The benefit of this combination is a much friendlier user interface plus an intelligent geometric database.”

According to Sam Geisberg, the founder of PTC:

“The goal is to create a system that would be flexible enough to encourage the engineer to easily consider a variety of designs. And the cost of making design changes ought to be as close to zero as possible. In addition, the traditional CAD/CAM software of the time unrealistically restricted low-cost changes to only the very front end of the design-engineering process.”

To say Pro/E was a success would be a terrible understatement. Within a few years PTC was winning major accounts from the old-line competitors. In 1992, on the strength of its product, PTC walked away with a 2,000-seat order from Caterpillar that Unigraphics had thought was in the bag.

The secret to Pro/E’s success was its parametric feature-based solid modeling approach to building 3D models. To companies such as Deere and Caterpillar, it offered a compelling vision. Imagine being able to build a virtual CAD model of an engine, and, by changing a few parameters, being able to alter its displacement, or even its number of cylinders. And even if that wasn’t achievable, it would be a great leap forward to just be able to rapidly create and explore design alternatives for parts and assemblies.

Yet, things were not that easy. In 1990, Steve Wolfe, one of the CAD industry’s most insightful observers, pointed out that Pro/E was incapable of making some seemingly simple parametric changes.

Pro/Engineer placed limits on the range of parameters. (A designer could not increase the dimension of L2 to the point that L3 vanished.)
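As a toy illustration of the figure above (assuming a dimension scheme of my own invention, not necessarily Wolfe’s actual part), suppose an overall length L1 drives a driven segment L3 = L1 - L2. Push L2 past L1 and L3 must vanish, so regeneration can only fail:

def regenerate(L1, L2):
    L3 = L1 - L2                  # driven dimension
    if L3 <= 0:
        raise ValueError(f"regeneration failed: L3 = {L3}")
    return L3

print(regenerate(100.0, 60.0))    # fine: L3 = 40.0
try:
    regenerate(100.0, 110.0)      # the kind of edit Wolfe describes
except ValueError as e:
    print(e)                      # regeneration failed: L3 = -10.0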

David Weisberg, editor of the Engineering Automation Report (and from whose book, The Engineering Design Revolution, I have liberally cribbed for this article), pointed out the fundamental problem with parametrics:

“The problem with a pure parametric design technique that is based upon regenerating the model from its history tree is that, as geometry is added, it is dependent upon geometry created earlier. This methodology has been described as a parent/child relationship, except that it can be many levels deep. If a parent level element is deleted or changed in certain ways it can have unexpected effects on child-level elements. In extreme cases (and sometimes in cases that were not particularly that extreme), the user was forced to totally recreate the model… Some people described designing with Pro/ENGINEER to be more similar to programming than to conventional engineering design.”

Weisberg barely scratches the surface of the issues that can create problems.
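Here’s a minimal sketch of the mechanism Weisberg describes (Python, hypothetical feature names, no real CAD API): regeneration replays the history tree in order, and each feature resolves a reference to geometry its parent created, so breaking one parent strands descendants several levels down.

def regenerate(history, suppressed=frozenset()):
    model = {}                                   # feature -> geometry it produced
    for name, parent, geometry in history:
        if name in suppressed:
            continue
        if parent is not None and parent not in model:
            raise RuntimeError(f"'{name}' failed: parent '{parent}' is gone")
        model[name] = geometry
    return model

history = [
    ("base",   None,     "block"),
    ("pocket", "base",   "pocket_walls"),
    ("fillet", "pocket", "rounded_edges"),       # three levels deep
]

print(regenerate(history))                       # all three features rebuild
try:
    regenerate(history, suppressed={"pocket"})   # delete a mid-tree parent
except RuntimeError as e:
    print(e)                                     # 'fillet' failed: parent 'pocket' is gone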

In 1991, Dr. Jami Shah wrote an Assessment of Features Technology for Computer-Aided Design, a journal targeted at people doing research in the field of CAD. He identified a number of problems with features:

“There are no universally applicable methods for checking the validity of features. It is up to the person defining a feature to specify what is valid or invalid for a given feature. Typical checks that need to be done are: compatibility of parent/dependent features, limits on dimension, and inadvertent interference with other features. In a study for CAM-I, Shah et al. enumerated the following types of feature interactions:

  • interaction that makes a feature nonfunctional,
  • non-generic feature(s) obtained from two or more generic ones,
  • feature parameters rendered obsolete,
  • nonstandard topology,
  • feature deleted by subtraction of larger feature,
  • feature deleted by addition of larger feature,
  • open feature becomes closed,
  • inadvertent interactions from modifications.”

The important thing to notice here is that, not only are there multiple failure modes for features, there are also no universal methods for validating features. It’s left up to the user to figure out. And that process, as Weisberg hinted, is much too difficult.

Rebuild Error

Since the early days of Pro/E, a lot of work has been done, both by PTC and other companies in the CAD industry, to improve the reliability and usability of parametric feature-based CAD software. Yet, the problems that Weisberg and Shah identified still exist, and still get in the way of users being able to get the most from their software.

Next: The problem is editing.


Filed Under: 3D CAD Package Tips, Autodesk, Creo, Design World, Evan Yares, Featured, Inventor, Pro/Engineer, Siemens PLM, SolidWorks Tagged With: Creo, Inventor, IronCAD, Solid Edge, SolidWorks

Cheetah, Creo, and 2D geometric constraint solvers

June 23, 2012 By Evan Yares

Last week, I wrote, in Solving the CAD concurrency problem, about 2D geometric constraint solvers.

Solvers are one of the major components used in 3D CAD programs, and are the main part of the sketcher used in parametric feature (history-based) modelers. They’re also used behind the scenes in direct modeling CAD systems. They’re pretty important, and have a significant effect on a CAD program’s performance.

Cloud Invent, a small software developer, made up—so far as I can tell—mostly of PhD mathematicians, recently posted a couple of interesting videos on YouTube. The first video showed the performance of the sketcher in PTC Creo Parametric 1.0, when dealing with massively large sketches.

The next video they posted was of their “Cheetah” solver, running on an identical sketch.

If you take the time to watch these two videos, you’ll see a couple of important things. First, the Creo Parametric solver seems to fall apart (become unstable) once faced with a large sketch. And the Cheetah solver doesn’t.

I chatted (by email) last week with both the folks from Cloud Invent and with PTC, to try to understand what I was really seeing. I also duplicated the demo from the videos using Autodesk Inventor, which uses Siemens PLM’s 2D DCM constraint manager.

Lev Kantorovich, from Cloud Invent, responded to my questions.

Q: What are you doing differently in Cheetah than what’s being done with other 2D constraint solvers?

A: The main advantage of our solver is that it has O(n) memory and time requirements. (To compare, PTC’s solver requires O(n²) memory and O(n³) arithmetic operations to solve a system of constraint equations. The situation is similar with other solvers.)

Modern solvers (PTC’s among them) use general-purpose methods of linear algebra. But the system of linear equations that appears in CAD is not “general purpose”: the matrix of such a system is very sparse. We know in advance that each row of this matrix has a fixed number of non-zeros (let’s say, not more than twenty). If you can use this information efficiently, you will dramatically improve performance and decrease the memory requirements of the solver.

That is exactly what we managed to do in our Cheetah solver. I apologize that I can’t provide detailed information about our algorithm (there is remarkable mathematical work behind this, and five or six PhD dissertations at some of the leading US universities; at the end of the nineties this issue was a focus of researchers), but I want to mention one additional advantage of the approach: our methods are well suited for parallel processing.
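To see why exploiting sparsity changes the complexity class, here’s a rough illustration using SciPy (emphatically not Cloud Invent’s algorithm, just the standard sparse-versus-dense argument): a constraint matrix with a bounded number of non-zeros per row can be stored and solved in roughly O(n) space, while a dense general-purpose solve needs O(n²) storage and O(n³) work.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 200_000                         # unknowns; a dense n x n matrix would need ~320 GB
# Tridiagonal stand-in for a sparse constraint system (3 non-zeros per row).
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x = spsolve(A, b)                   # the factorization touches only stored non-zeros
print(x[:3], A.nnz, "non-zeros instead of", n * n)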

Q: Are you supporting 3D constraints?

A: So far we have tested our solver in the 2D sketcher only, but I don’t see any reason why it shouldn’t work for 3D constraints as well. Actually, this is the most interesting direction.

Traditional parametric CAD lives only in 2D sections; this is part of the “parametric feature-based approach” to solid modeling. The reason for this is quite simple: these solvers can resolve only small models. That’s why a complicated solid model is divided into a hierarchical list of simple features (each one having its own parametric sketch), known as a “history tree.”

But we have a solver that is powerful enough to constrain the whole 3D model (using all reasonable 3D constraints). Now we can try to move away from the feature-based approach, with its notorious history tree, and unify the parametric and direct modeling approaches in one 3D workspace. This is, actually, our main target.

Q: On your website, you say other solvers solve equations in the wrong manner, “using archaic numeric methods” (e.g., Newton iteration, Gaussian elimination, and Gram-Schmidt orthogonalization). That hints that you might be using symbolic methods—or perhaps a hybrid numeric-symbolic method?

A: No, we don’t use symbolic methods.

Our method may be described as:

By using the specifics of the system of linear equations (a very sparse matrix), we subdivide the set of all equations into small groups and solve the small subsystems corresponding to these groups. Each subsystem has fixed requirements for memory and computation. The tricky part is how to choose these groups and how to coordinate data exchange between them; we use an iterative approach for that.
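That description has the flavor of classic block-iterative methods. Here’s a sketch of plain block Gauss-Seidel (my stand-in, not Cloud Invent’s proprietary scheme): small diagonal blocks are solved exactly, and repeated sweeps coordinate the groups.

import numpy as np

def block_gauss_seidel(A, b, block, sweeps=100):
    """Sweep over fixed-size diagonal blocks, solving each subsystem exactly."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for s in range(0, n, block):
            i = slice(s, min(s + block, n))
            # residual of this block, holding the other groups fixed
            r = b[i] - A[i, :] @ x + A[i, i] @ x[i]
            x[i] = np.linalg.solve(A[i, i], r)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 12)) + 20.0 * np.eye(12)   # diagonally dominant test matrix
b = rng.normal(size=12)
x = block_gauss_seidel(A, b, block=3)
print(np.allclose(A @ x, b))                        # True once the sweeps converge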

Q: You say that you have O(n) memory and computational efficiency. That accounts for the number of geometric elements (n), but what about the number of constraints (m)?

A: In Cheetah the number of arithmetic operations required to solve the system of equations is O(m), i.e., proportional to the number of constraints. That means that if you have millions of geometric entities but few constraints, the system will be solved fast.

Q: I’m not clear if you’re solving linear or non-linear equations.

A: We solve non-linear equations, but we do it in the standard way: by linearization at each step of the non-linear iteration. Normally, there are very few non-linear steps (two, three, rarely more). Our “know-how” is in solving the corresponding linear equations.
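For readers who haven’t seen “linearize at each non-linear step” in action, here’s a tiny worked example (mine, not Cloud Invent’s code): Newton iteration on two distance constraints, where each step solves the linearized system J Δp = -f.

import numpy as np

def residual(p):
    x, y = p
    return np.array([x**2 + y**2 - 25.0,             # point on a circle of radius 5
                     (x - 6.0)**2 + y**2 - 16.0])    # and at distance 4 from (6, 0)

def jacobian(p):
    x, y = p
    return np.array([[2 * x,         2 * y],
                     [2 * (x - 6.0), 2 * y]])

p = np.array([4.0, 4.0])                  # rough initial guess
for _ in range(10):                       # "two, three, rarely more" in practice
    step = np.linalg.solve(jacobian(p), -residual(p))
    p = p + step
    if np.linalg.norm(step) < 1e-12:
        break

print(p, residual(p))                     # converges to (3.75, 3.307...), residual ~0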

Q: Can you give me more detail on your support for parallelism?

A: The parallelism is based on what was written above about our method: each small group of equations can be processed independently. The author of this method, Nick Sidorenko, thinks that what we are doing is quite new.

Q: You don’t mention anywhere what type of objects you support (e.g. points, lines, circles, arcs, ellipses, planes, cylinders, spheres, NURBS, parametric curves, surfaces), or what type of constraints you support (e.g. coincidence, parallelism, tangency, curvature, etc.)

A: At this moment we have only a prototype. We were focused on proving the concept of the algorithm (solving a system of sparse linear equations). We tested it with the 2D sketcher; we haven’t yet tested 3D objects (planes, cylinders, spheres, NURBS, parametric curves, surfaces).

Q: Do you have a Cheetah solver prototype available yet?

A: The prototype will be available soon for download from our site. It works at the moment with line segments and circles only. The set of constraints is also restricted: horizontal, vertical, same point, equal, parallel, perpendicular, and tangent (between a line and a circle). It is also possible to set the length of a line segment, the distance between two points, the radius of a circle, and the angle between two lines.

Once again, our goal was not a full-range parametric sketcher. We used FreeCAD as a test platform for the algorithm. Perhaps in the near future we’ll add more geometric entities (at least circular arcs, and maybe ellipses) and more geometric constraints.

Since Cloud Invent was using Creo Parametric as an example of a typical 2D solver, I wanted to get a response from PTC. Brian Thompson answered my questions.

Q: Do your customers seem generally satisfied with the interactive performance of the sketcher?

A: Yes, except when importing some large cosmetic sketches – i.e., something that users don’t want to reference in another feature. For this use case, our cosmetic sketch can be unconstrained, allowing much larger sketches to be imported without solver involvement.

Q: Does the sketcher’s performance have a significant effect on the overall regeneration time for 3D parametric models?

A: No, because highly complex relationships are generally not captured in sketches. They are generally captured at a higher level.

Q: Has your solver fundamentally changed in its underlying design from the early days of Pro/E?

A: Very tough question depending upon how you define “fundamentally”, but yes. Consider that early sketches had to be regenerated, while now we have a constraint-based solver. Then, consider that this constraint-based solver has had significant performance enhancements since its inception.

Q: Do you anticipate any significant improvements in the future, such as using some of the more modern developments for parallel solutions of systems of linear equations?

A: 2D is an area in which we will continue to invest. Improvements in 2D user experience and performance will be on the table for many releases to come, as our 2D strategy on the Creo platform grows and matures.

Julie Blake, of PTC, also responded, when I pointed out the Cloud Invent videos to her:

As you could tell I’m sure, the video is a very academic example, not something used in production. However, the sketching environment overall is something that is important to PTC and is of course fundamental to geometry creation in any CAD application. The Creo team has continuously worked to improve the sketcher performance over the past several years and our general 2D capabilities – regardless of which app they are in – are expected to be a continued area of focus for us over many releases to come.

Julie was right. The Cheetah solver is not a commercial product yet. Though I suspect they have the first 90% of the work done on it, that just means they need to finish the other 90%. (That’s a software developer’s joke.)

As I looked carefully at the Cloud Invent video that showed Creo’s performance, what I saw, reading between the lines, was not instability in the Creo 2D solver. Rather it was a performance issue. With a very large sketch, the interactive performance of Creo got sluggish. If you just move and click your mouse when the system is responding slowly, you’re going to get unpredictable results. This is true with Creo, or with any interactive computer program.

I can’t say that I’m completely pleased with Creo’s ability to handle really large sketches. Yet, I’m not the one using the program. If Creo users are generally happy with its performance in this realm, then it’s good enough by my book. Creo’s support for unconstrained cosmetic sketches provides a reasonable solution for very large sketches. (If you’re a Creo user, and have any thoughts on this, please add a comment below.)

I find what Cloud Invent is doing to be quite interesting, and I’m hopeful that they’ll be able to get their product to the point where it’s commercially viable. I suspect, though, that their best path to market may be through working with (or being acquired by) a major CAD or components company. Preferably one with a whole bunch of users who’ll benefit from having access to Cloud Invent’s technology.

Filed Under: Creo, Evan Yares, Featured, News Tagged With: 2D DCM, Autodesk, Cloud-Invent, Creo, D-Cubed, Inventor, PTC, Siemens PLM, Sketcher, Solver

Solving the CAD concurrency problem

June 15, 2012 By Evan Yares

Earlier this week, I was doing some software testing on my lab machine. It’s a really nice Z1, on loan to me from HP. It has an 8-core high-end Intel processor. I brought up the process monitor as I worked, and watched, somewhat amused, as Autodesk Inventor pegged one core at 100% for several minutes, while the other cores sat there, doing almost nothing.

This is a really big sketch. CAD programs don't like these.

It wasn’t Inventor’s fault. Well, not really. The particular test I was doing was designed to push the 2D sketcher in Inventor to its limit. It contained 1024 triangles, connected with over 3072 constraints (I didn’t count exactly). That sketcher uses a component called 2D DCM (Dimensional Constraint Manager), part of the D-Cubed group of software components, developed and sold by Siemens PLM Software.

Many well-known CAD programs use D-Cubed software. It’s the stuff that, when you push and pull on a sketch (or a CAD model), figures out what you’re trying to do, and calculates the resulting shape. 2D DCM is often called a “constraint manager,” or a “solver.” Built into its heart are a bunch of very complicated algorithms for solving systems of linear equations. It’s PhD-level stuff.

In the case of my testing, it was 2D DCM that used all the power of one core, but ignored the other cores in my computer – essentially, leaving 7/8 of the power that HP built into the computer untapped.

So, here’s the question: Why doesn’t Siemens PLM just tell their programmers to fix 2D DCM, so it can use multiple cores? Why not rewrite it to support concurrency? If they did that, it’d solve a lot of other problems at the same time—for example, it would make creating cloud based CAD systems, that run across multiple processors and servers, a lot easier to implement.

As a start, 2D DCM has been thread-safe since 2009. A CAD system can run multiple instances of the program on parallel processors, without any significant performance hit.

So it does run on multiple cores. Problem solved?

Hardly.

In my test, I’d created a 2D sketch in Inventor, where moving any one node or edge required the system to recalculate all of the lines in all of the triangles. All 1024 of them. There were no independent constraints, where making a change would not affect other geometry. They were all interlinked.

Suppose Autodesk’s programmers had set up Inventor to use multiple instances of 2D DCM on multiple cores. How could a problem such as mine be partitioned to use those multiple instances?

The answer is: it couldn’t. Running 2D DCM on multiple cores allows those multiple instances to solve independent constraints. Not interlinked constraints.

Let me see if I can paint a picture of the problem. When I was a kid, I used to play a game called pick-up sticks. The idea was to dump out a bunch of long sticks on the floor (or table), creating a tangled pile. Each player, in turn, would remove a stick from the pile without disturbing the remaining ones.

Pick-up Sticks

Imagine several people playing pick-up sticks, but instead of waiting in turn, all of them trying to remove sticks at the same time. Concurrently. That’s pretty analogous to the problem of partitioning the data in a sketch in a way that it’s possible to use parallel solvers. There’s no easy way of partitioning the equations representing the system of constraints in such a way that they can be solved in parallel.
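For the curious, here’s a minimal sketch of that partitioning problem in code (my illustration, not how 2D DCM works internally): treat geometry as graph nodes and constraints as edges, and use union-find to count the independent groups. Each group could go to its own solver instance; my triangle sketch collapses into a single group, so the extra cores have nothing to grab.

def solver_groups(n_nodes, constraints):
    """Union-find over constraint edges; returns the number of independent groups."""
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    for a, b in constraints:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(n_nodes)})

# 1000 disjoint two-node constraints: 1000 groups, trivially parallel.
print(solver_groups(2000, [(2 * i, 2 * i + 1) for i in range(1000)]))   # 1000

# A chain linking every node, like my interlinked triangles: one group.
print(solver_groups(2000, [(i, i + 1) for i in range(1999)]))           # 1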

2D DCM has been around for quite a long time, as CAD component software goes. When it was designed, the programmers likely looked at the issue of parallel computing, shuddered, and decided to focus on making the software actually work right in the first place. It probably made sense at the time: Multicore processors, and even parallel computers, were rare.

Over the years, some things have changed. Multicore, parallel processors, clusters, and cloud computing are now commonplace. And there have been advances in math. Do a search on Google Scholar for “parallel solutions of linear systems” and you’ll get a lot of results. Still, adding parallel support into a tool like 2D DCM isn’t just a matter of writing some lines of code. It might involve tearing it down to the ground, and rebuilding it with a completely new architecture.

Is Siemens willing to invest what would likely be a princely sum in rebuilding their D-Cubed products from the ground up? I can’t answer that question. If I asked the folks at Siemens, and they told me, I’d not be able to tell anyone else. Trade secrets, you know. But I can say that I hope they are looking at this problem, because it’s one of the key limitations that get in the way of developing next-generation high-performance CAD software.  The kind that can run on multiple cores, multiple processors, clusters, or the cloud.

I wrote this post in response to a commenter, who was raising the issue of the lack of multicore support in current CAD systems. I think his concern is valid, but I wanted to make the point that this is not a simple problem to fix, whether in geometric constraint managers, or geometric modeling kernels. It’s like the pick-up sticks problem: Really difficult, even if you throw big piles full of money at it, and wave fat paychecks at PhD mathematicians.

Still, there are people working on these problems. Next week, I’ll be writing a bit about Cloud Invent, a tiny company that may have made a breakthrough in geometric constraint modeling.


Pick-up sticks image courtesy David Namaksy, boardgamegeek.com

Filed Under: Evan Yares, Featured, Inventor, News, Siemens PLM Tagged With: Autodesk, cloud, Cloud-Invent, Concurrency, D-Cubed, DCM, Inventor, Multicore, Siemens PLM
