AI Takes on Architectural Acoustics
How technology is utilized is an important part of any application. So where does AI fit into all of this?
By Jon W. Mooney
I feel lucky to have an acoustics career that has spanned most of the “computer age.” In that time, our architectural acoustics problems have remained the same, but our computer design tools have continued to get better.
Starting with the sheer elegance of physically wiring the patch bay of an analog computer; to the big “ka-chunk” sounds of typing on a massive, steampunk-looking keypunch card machine; from listening to my dumb 9,000-baud terminal dial into a liquid-cooled Cray supercomputer, to mounting heavy 1-inch tape drives and administering room-size company mainframe computers; to moving individual bits in the ever-shrinking self-contained microcomputer; and now back to the dumb smartphone dialed into cloud-based acoustics software-as-a-service apps.
With the introduction of each new computer design tool came the promise of reduced effort, faster solutions and more free time, but for me the result always seemed to be more work and greater competition. It’s with that background that I judge the current promises and concerns for artificial intelligence and, specifically, its use in acoustics design.
Spring Into Action
At the recent spring conference of the Acoustical Society of America, held this year in Chicago, no fewer than 80 papers were presented on the current use of artificial intelligence and machine learning in acoustics, with five of those papers specifically on its use in architectural acoustics design. By comparison, only three papers related to AI&ML were presented at the spring conference of the Institute of Noise Control Engineering, held one week later in Grand Rapids, Mich. Since INCE members tend to be acoustics practitioners while ASA members tend to be acoustics researchers, artificial intelligence doesn’t seem to be poised to take over our architectural acoustics industry just yet.
Any reader who hasn’t played around with one of the new online AI platforms should try it out for themselves. Although there are a few AIs that can generate visual art, the easiest ones to start with are the interactive language models such as ChatGPT, developed by OpenAI (openai.com). Text your question or request as you normally would to a person, and the AI texts a reply right back to you. It is quite fun to see just how fast and smart the AI seems to be. Its answers appear to be well-researched, thoughtful and authoritative, but all of that is just an illusion. AIs do not think, research, create or question what they do or do not know. They are simply dumb combinations of filters and templates that generate output to match your input text. AIs sound well-researched because the AI filters were formed from massive amounts of publicly available writing. AIs sound thoughtful because their response templates (like a page of Mad Libs) were designed to sound that way. And AIs sound authoritative simply because they are incapable of original thought (so that’s the secret).
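To make that point concrete, here is a deliberately crude Python sketch of the filter-and-template analogy. It is only an illustration, not how ChatGPT or any real language model is actually built; the tiny corpus and the word-counting scheme are invented for the example.

```python
# A toy illustration (not how ChatGPT is built) of the author's analogy:
# fluent-sounding text can come from nothing more than counting which words
# tend to follow which other words, then sampling from those counts.
import random
from collections import defaultdict

corpus = (
    "reverberation time depends on room volume and absorption . "
    "absorption depends on the materials in the room . "
    "the room volume and the materials set the reverberation time ."
).split()

# "Training": count how often each word follows another (the "filter").
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# "Generation": start from a prompt word and keep picking a likely next word.
random.seed(0)
word = "reverberation"
output = [word]
for _ in range(12):
    word = random.choice(next_words.get(word, corpus))
    output.append(word)

print(" ".join(output))  # reads smoothly, yet nothing here "understands" acoustics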
Once these limitations are understood, AI&ML becomes a useful design tool that is presently being used in acoustics research, primarily for its ability to quickly filter and sort through complex data. If you’ve ever been asked to select all the pictures containing cars (or whatever), then you’ve helped to form an AI filter. AI filters are formed using thousands of pictures of cars, or numbers or words, or anything else that can be digitized. Once complete, the filter will identify anything similar as a “car.” AI developers call this process “supervised training,” but they are really just homing in on a bunch of filter coefficients; nothing is actually being trained.
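As a minimal sketch of what “supervised training” amounts to, the toy example below fits an ordinary logistic-regression classifier to a handful of labeled examples. The panel features, the labels and the choice of scikit-learn are all assumptions made for illustration, not anything from the research described here; the point is simply that the finished “filter” is nothing more than the fitted coefficients.

```python
# "Supervised training" as the author describes it: labeled examples go in,
# and what comes out is just a set of fitted filter coefficients.
# The data below are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical acoustic panels: [thickness_mm, density_kg_m3],
# labeled 1 = "good absorber", 0 = "poor absorber".
X = np.array([[25, 40], [50, 60], [100, 80], [12, 30], [6, 20], [75, 70]])
y = np.array([0, 1, 1, 0, 0, 1])

clf = LogisticRegression().fit(X, y)   # "training" the filter...
print(clf.coef_, clf.intercept_)       # ...really just produces these numbers
print(clf.predict([[80, 65]]))         # apply the filter to a new, unseen design
```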
Learning a New Tool
Using a similar approach, researchers at Bethany Lutheran College in Minnesota and the University of Tennessee at Chattanooga recently trained an AI on the constructions and performances of various sound insulation designs. They then used that AI to predict the performance of new, untested sound insulation designs. Researchers at Bilkent University in Turkey have trained an AI filter with the sounds of various musical instruments along with the audience’s emotional reactions to each instrument. The AI is then used to predict people’s emotional responses to new soundscape designs, such as for a work environment. Engineers in Belgium have trained AI filters to predict the characteristics of traffic noise.
Last year, institutions from around the globe collaborated to train artificial intelligence both to identify random sources of noise and to determine how annoying those particular sounds are. Researchers at MIT have trained an AI filter to predict a room’s geometry by listening to the sound of a balloon popping within the room. Architectural acoustics students at Penn State University have trained an AI filter with data from various concert halls and are now using it to predict the reverberation of new concert hall concepts.
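The concert-hall example follows the same recipe, and a rough sketch of it might look like the following. The hall features, the reverberation times and the random-forest model are all invented for illustration; they are not the Penn State students’ actual data or method.

```python
# Rough sketch of the "train on measured halls, predict a new concept" idea.
# All numbers are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training set: [hall volume m^3, seat count, avg absorption coeff]
X = np.array([
    [12000,  900, 0.25],
    [18000, 1400, 0.22],
    [25000, 2000, 0.28],
    [ 8000,  600, 0.30],
    [30000, 2300, 0.20],
])
rt60 = np.array([1.6, 1.9, 1.8, 1.3, 2.1])   # measured reverberation times, seconds

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, rt60)

# Ask the fitted model about a new, untested hall concept.
concept = np.array([[20000, 1600, 0.24]])
print(f"Predicted RT60: {model.predict(concept)[0]:.2f} s")
```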
While acoustics researchers are principally exploring the use of artificial intelligence for complex sorting and identification work, acoustic consultants, engineers and architects are giving AI mixed reviews. As an independent acoustics consultant, I’ve found the new language-generating AIs to be useful for bouncing new ideas around. These new AI systems have been designed to quickly “teach” themselves by scanning the internet and other open libraries. I think that conversing with an AI is more productive than conducting an internet search, but it’s not quite at the level of an Iron Man/Tony Stark/Jarvis relationship.
Trained in AI
Since most AI has been trained using openly available sources, it tends to repeat commonly held misconceptions and outright disinformation on technical subjects such as architectural acoustics. In some conversations, it will also just make things up (AI practitioners call this “hallucination”). In this regard, it is more like conversing with that one friend from college who has read a lot of stuff on the internet but has no practical experience or expert knowledge, and has a tendency toward BS. Its answers will normally be biased toward popular opinion and are not necessarily correct.
Other acoustic consultants, engineers and architects who have attempted to use generative AI for their consulting work have come to the same conclusion, and they periodically share some of AI’s amusingly incorrect responses to their design questions. Although recent news reports warn of the potential of AI to make some jobs obsolete, AI’s significant limitations, along with its currently limited use by the profession, suggest that jobs within the architectural, engineering and construction sector are not in any immediate danger.
Of course, there is another type of AI that has been a problem in our industry for the past hundred or so years. As far back as the late 1800s, the phrase “artificial intelligence” was originally a derogatory term for someone who had read a lot but had no practical experience or expert knowledge, and had a tendency toward BS (that sounds familiar).
Throughout my career, I’ve been brought in to correct more than a few projects that had previously been the victims of bad acoustics advice from such “AI-humans.” Driven by a desire to be competitive with the least effort and the fewest scruples, some people are now passing off a surge of AI-generated content as their own original work. The number of students using AI to generate their essay assignments has grown so fast that there is now a large market for software that schools can use to detect AI-generated student essays. In a more serious infraction, a lawyer recently submitted a legal brief to a federal judge that included the details of several cases cited as precedent. Upon review, the judge found that none of the six cases were real; all were hallucinations made up by the AI the lawyer had been using, down to the minute details of each case. That law firm is presently facing possible sanctions.
As with each new computer design tool of the past, AI holds the promise of reduced effort, faster solutions and more free time while creating more work to do and increasing competition. In the few months that I’ve been using transformer-based AI, it has reduced the effort and helped speed up solutions for tasks such as developing appropriate acoustics requirements for a project, weighing the pros and cons of various acoustic design options and planning for acoustic testing. On the other hand, in each of those cases I had more work to do reviewing, confirming or discounting the AI-generated content. As an experienced reviewer of engineering books and papers, as well as an editor of a peer-reviewed engineering journal, I’m used to this type of technical review and editing and enjoy it. Others may find the additional effort to be too much for AI’s benefits.
But AI isn’t just for work. In my free time, I’ve been conversing with AI to explore important ideas like new tunings for my guitar, to write song lyrics, and to work on my other hobbies.
Even if AI is not quite ready to take over the architectural acoustics industry, it is fun to play around with.
Jon W. Mooney, PE, is the principal consultant for Acoustics by JW Mooney LLC, a small-town, Iowa-based firm providing state-of-the-art acoustics, vibration and systems engineering consulting for projects throughout the Midwest. Email: acoustics@jwmooney.com.