Voice and haptic feedback could ease CAD complexity and speed design, but the technology is slow to market due to that very complexity.
By Jean Thilmany, Senior Editor
“Computer, draw me a circle.”
You won’t be saying that to your computer-aided design system anytime soon. Even as Alexa and other speech-recognition systems have become ubiquitous over the past decade, voice-controlled CAD remains elusive.
Developers say design software that responds to verbal commands could cut the learning curve, make it easier to work with a system, and slash design time.
Perhaps it’s no surprise that voice-controlled CAD isn’t here. CAD is vastly more complex than the speech recognition tools we use today. Asking Alexa to turn up the thermostat or dictating a text message is very different from verbally controlling geometries, parts, and mechanical forces on a screen.
When will voice-controlled CAD be commonly available? It’s difficult to know.
Yet CAD systems that “hear” and follow commands could allow design teams to zoom in on the specifics of a CAD model and to make changes during a meeting. Designers could quickly add, remove, or update information in design databases and could quickly make routine requests, like opening a screen or drawing a circle. Down the line, they may be able to do away with keyboard and mouse and to design a model via voice command.
The concept of voice-controlled CAD is not new, but getting there is difficult.
In 2009, researchers at the University of Hong Kong proposed a method for voice-controlled CAD. A decade later, scientists at Purdue University and at two Spanish universities set out a method for using voice to capture design intent and annotation. Over the years, other systems have been proposed, but they remain expensive to realize and implement.
Meanwhile, designers still rely on their keyboard and mouse.
Is voice practical?
One AutoCAD LT user gave voice to this frustration, asking in mid-2020 on the software’s community forum why voice command wasn’t a feature of the CAD program.
“Voice command could result in titanic time savings. Consider the chains of actions that could be bypassed by a single voice command,” the user wrote. “Just starting a sketch sometimes requires moving your cursor to change from the manufacturing environment to design.
Why not just double mouse click and say ‘concentric circle’ or whatever your first sketch move is to be?”
Answers varied, with several responders suggesting the user learn keyboard shortcuts and write macros to speed drafting time. Some pointed out that the time spent voicing the words “concentric circle” could instead be spent hitting a key that triggers a macro to make the circle.
Others suggested training commercially available voice-recognition technology to trigger the macros.
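That macro-triggering workaround can be sketched in a few lines: map a small set of recognized phrases to existing macro names and dispatch on a match. The phrase strings and macro names below are illustrative assumptions, not taken from any real CAD product.

```python
# Hypothetical sketch: routing recognized spoken phrases to existing CAD
# macros. VOICE_MACROS and the macro names are invented for illustration.

VOICE_MACROS = {
    "concentric circle": "_circle_concentric",
    "make pdf": "_export_pdf",
    "fit view": "_zoom_extents",
}

def dispatch(recognized_phrase: str):
    """Return the macro name to run for a recognized phrase, or None."""
    return VOICE_MACROS.get(recognized_phrase.strip().lower())
```

The point the forum responders were making is visible here: the dictionary lookup is trivial, so the speech-recognition front end, not the CAD integration, is where the effort lies.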
“Voice commands regarding drafting are just not practical,” one community member responded. “The only way I see voice commands being used is ‘Alexa, make a PDF and send this drawing to Gavin with subject: project voice.’”
Moving beyond today’s speech recognition systems would be necessary to handle CAD’s complexity, but doing so isn’t yet practical, says Natalie Hutchins, an engineer and writer at IndiaCAD, which provides outsourcing services.
Hutchins created a table that compared the features of the voice-recognition programs from Nuance, Microsoft, and Google. None of the three could interpret spoken words in the correct context with complete accuracy, she found.
Not too long ago—about 30 years, a lifetime by technology standards—the mouse and the graphical display were huge engineering design breakthroughs. CAD came along at the same time as computer graphics programs. Both technologies allowed shapes to be depicted on computer screens that had until then been dominated by blinking letters and numbers.
For the first time, engineers could depict images on-screen and make quick changes to dimensions and shapes when needed.
CAD advancements have continued apace. Three-dimensional CAD became commonplace. Analysis software is now tied to CAD so engineers can immediately analyze their designs and make changes where needed.
Ironically, continued CAD updates keep the systems from being compatible with voice technology.
Today’s designers often browse among hundreds of CAD icons and menu scripts and switch between various command panels to complete a modeling task, the University of Hong Kong researchers write. Ascribing a voice command to each of these actions would be impractical and would make the designer’s life harder, not easier.
The researchers’ paper, “Natural Voice-Enabled CAD: Modeling Via Natural Discourse,” appeared in the January 2009 edition of the journal Computer-Aided Design and Applications. Sukui Xue was lead author; the paper was his mechanical engineering postdoctoral thesis.
While voice-driven CAD commands would be of “tremendous benefit,” the technical challenge of creating and implementing the technology means it likely won’t be available in the near future, Hutchins says.
Speaking in CAD talk
That hasn’t held CAD companies back from trying.
The CADmaker think3 met with some success with a 2000 software update that included a speech-enabled graphical user interface, which allowed the user to issue commands without scrolling through icons and pull-down menu trees. This reduced the clutter of dialog boxes, saved time, and increased productivity, according to the company’s marketing materials of the time.
Voice input provides designers with a third option, along with the mouse and the keyboard, for entering commands or numerical inputs. The software was able to recognize several hundred voice commands, including basics like draw, zoom, redraw, fit view, and line. The software also recognized numerical values.
The feature used Microsoft’s Speech Application Programming Interface version 5.0, an interface for third-party application developers, according to think3.
But as it added functionality to the new release, think3 had to steer a careful path between complex CAD and ease of use, since the company cited simplicity as a major selling point, the University of Hong Kong researchers say.
The California CADmaker closed its doors in 2011, though the move had nothing to do with its system’s voice-recognition capabilities.
In 2005, Enact Technologies introduced Speak4CAD, compatible with AutoCAD software. During beta testing, the software doubled CAD productivity, as measured by comparing manual drawings to those created by spoken drawing commands and dimensions, said Bruce Swan, then a senior partner at Enact Technologies.
The technology was written specifically for AutoCAD commands to make it faster than standard speech-recognition software, which must search for terms, Enact wrote in its marketing materials at the time. The user would dictate commands and numbers while moving the mouse, eliminating the need for the keyboard.
Enact Technologies is no longer around. It’s not clear whether the business folded or was purchased by another company.
The think3 and Speak4CAD systems relied on predefined, targeted words and phrases, which allowed users to use relatively complicated expressions such as “view from left” and “add a circle,” Xue writes.
“However, this method still restricts the user’s expressive style by all of the pre-defined rules,” he and his teammates write in the paper. Users must also remember all the fixed words and expressions.
“This impedes the freedom that might have been brought by speech, because too many restrictions have been added to the users’ expressions,” the researchers say.
They put forward a verb-based semantic search approach that would extract useful information from voice-issued sentence commands. Users would say, “draw me a circle that has a radius of 2.5 inches,” rather than “circle; radius; 2.5 inches.”
“Natural voice-enabled CAD frees CAD users from the buttons and menu by allowing natural discourse as the input. Natural discourse is also less restricted than the previous voice-based systems,” the researchers state in their paper.
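The verb-based extraction idea can be illustrated with a toy parser: pull the operation verb, the shape, and the dimension out of a natural-language sentence. This is a minimal sketch, not the researchers’ actual method—their system uses proper semantic search, while this regular expression only covers command forms like the paper’s circle example.

```python
import re

# Toy illustration of verb-based command extraction. The vocabulary
# (verbs, shapes, parameters) is assumed for the example, not drawn
# from the Hong Kong researchers' actual system.

def parse_command(sentence: str):
    pattern = (
        r"(?P<verb>draw|create|make)\b.*?"      # operation verb
        r"(?P<shape>circle|line|square)\b.*?"   # geometric entity
        r"(?P<param>radius|length|side)\s+of\s+"
        r"(?P<value>[\d.]+)\s*"                 # numeric value
        r"(?P<unit>inches|mm)?"                 # optional unit
    )
    m = re.search(pattern, sentence.lower())
    if not m:
        return None
    return {"verb": m["verb"], "shape": m["shape"],
            m["param"]: float(m["value"]), "unit": m["unit"]}
```

A sentence such as “draw me a circle that has a radius of 2.5 inches” yields the verb, the shape, and the radius as structured data—the kind of information a CAD kernel could act on.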
Despite these merits, their system has limitations, they acknowledge. Because it doesn’t eliminate the mouse, those with paralysis or other types of disabilities can’t use it. Also, the system should recognize more natural phrasing and should be usable without training, the researchers say.
Their proposed system isn’t yet included within commercially available CAD software.
Annotation while speaking
In the face of the technical limitations of using voice for design, some researchers are looking at voice-driven 3-D annotation to aid collaborative design, as voice annotation may be easier to develop and implement than voice-driven design.
Annotation enables the exchange of design intent and rationale with other users directly through the 3-D model, says a research team of mechanical and construction engineering professors from Purdue University in West Lafayette, Ind., and from Jaume I University and the Polytechnic University of Valencia in Spain.
Their paper, “A voice-based annotation system for computer-aided design,” appeared in the April 2021 edition of the Journal of Computational Design and Engineering.
Much of the information generated during the product design process is unstructured, writes Raquel Plumed, lead author and mechanical engineering professor at Jaume I University. That is, much of the design information is exchanged verbally and isn’t captured within the CAD system.
This information takes the form of facts, suggestions, informal conversations, discussions, and opinions.
Such information—communicated during informal conversations or even formal meetings—can be critically important for data integration, collaboration, process efficiency, productivity, and error reduction, she and her colleagues write.
“But the knowledge is often not captured or archived for future use because the process is time consuming, inefficient, and not cost efficient,” they say.
The researchers give the example of design rationale, which aims to capture information about the reasoning, motivation, and justification for design decisions and to describe their relation to other decisions.
Much of this might take place during a quiet conversation between two engineers. But it’s time-consuming to write down and store verbal conversations and then to find them again.
“Furthermore, engineers and designers often used vague expressions in their verbalizations of a problem or a design approach, particularly during the early stages of the design process, which makes it difficult to establish semantics in CAD models,” they say.
They put forward voice-based software that annotates 3-D models directly within the CAD environment.
Their method automatically captures audio signals and transcribes them to a 3-D note, which is attached to the geometry in the right spot and is available to other product information and business processes across the enterprise, such as a product management system.
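The flow the paper describes—capture speech, transcribe it, attach the note to the geometry—can be sketched as a simple data pipeline. The class and function names below, including the stubbed transcribe() call, are assumptions for illustration; the researchers’ system uses a real speech-to-text engine inside the CAD software.

```python
from dataclasses import dataclass, field
import datetime

# Hedged sketch of a voice-annotation flow: a transcribed utterance
# becomes a note anchored to a geometry element. All names here are
# hypothetical, not the paper's actual implementation.

@dataclass
class Annotation:
    face_id: str   # geometry element the note is attached to
    text: str      # transcribed speech
    author: str
    created: str = field(
        default_factory=lambda: datetime.date.today().isoformat())

def transcribe(audio_bytes: bytes) -> str:
    """Placeholder for a speech-to-text call."""
    return "Increase fillet here to reduce stress concentration."

def annotate(model_notes: list, face_id: str, audio: bytes,
             author: str) -> Annotation:
    note = Annotation(face_id=face_id, text=transcribe(audio), author=author)
    model_notes.append(note)  # persisted with the model for reuse downstream
    return note
```

Storing the note alongside the geometry, rather than in meeting minutes, is what makes the design rationale retrievable by other enterprise systems later.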
These researchers join others in describing how CAD systems could best incorporate voice commands. Still, voice-activated CAD remains out of reach, likely due to cost and complexity.
Or, as one user put it in the AutoCAD LT forum: “Unless commercial CAD systems adopt voice technology, we won’t be seeing it anytime soon.”