BIM - Engineering.com
https://www.engineering.com/category/technology/bim/

How do LiDAR, laser scanning, photogrammetry and GNSS compare for capturing AEC details?
https://www.engineering.com/how-do-lidar-laser-scanning-photogrammetry-and-gnss-compare-for-capturing-aec-details/
Wed, 08 Jan 2025 15:33:32 +0000
Here's an overview of spatial imaging technologies AEC engineers use to capture and collect data.

The post How do LiDAR, laser scanning, photogrammetry and GNSS compare for capturing AEC details? appeared first on Engineering.com.

In most architecture, engineering and construction (AEC) projects, one of the first key tasks is to gather data on existing conditions. This might start with collecting historical data from available records, followed by some type of project-specific survey to more accurately map site topography and existing facilities. Additional surveys are typically required during construction and after completion to establish as-built conditions.

For years, AEC teams relied on conventional tools such as tape measures, transits, theodolites and levels to collect data, build base maps and document construction projects. As new technologies were developed, AEC professionals gained several options for data collection, along with improvements to conventional technology. Let’s take a look at four common technologies used to collect AEC data: LiDAR (light detection and ranging, or sometimes laser imaging, detection and ranging), laser scanning, photogrammetry and GNSS (global navigation satellite system).

LiDAR and laser scanning

LiDAR and laser scanning are similar technologies, with some subtle differences. Both rely on laser technology, which gained widespread use among AEC professionals in the 1980s and 90s, primarily for measuring distances and establishing alignments and level surfaces. By directing a laser beam at an object and measuring the time for the reflected beam to return to the receiver, laser-based tools enabled users to measure distances accurately with the push of a button. And since laser beams do not disperse appreciably, they proved highly effective for establishing alignments and level planes.
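
The time-of-flight principle behind laser distance measurement can be sketched in a few lines. This is a simplified illustration; real instruments also correct for atmospheric effects and detector delays.

```python
# Time-of-flight ranging: a laser pulse travels to the target and back,
# so the one-way distance is half the round-trip time times the speed of light.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s: float) -> float:
    """Return the one-way distance in meters for a measured round-trip time."""
    return C * round_trip_s / 2.0

# A pulse returning after about 667 nanoseconds corresponds to roughly 100 m.
print(round(tof_distance(667e-9), 1))
```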

More recently, LiDAR has been used to capture large datasets by targeting an object or a surface with a laser and taking multiple measurements encompassing the area of interest. In conjunction with geolocated control points, the measurements can be used to establish coordinates at each point of measurement.

LiDAR systems may be ground-based or mounted on aircraft, such as drones, also known as uncrewed aerial vehicles (UAVs). Equipped with a laser scanner, along with GNSS equipment and an inertial navigation system, airborne LiDAR is often used to create 3D models of ground surfaces over widespread areas. Airborne systems can also be equipped with high-resolution cameras to capture imagery.

LiDAR data can be used to generate base maps for large areas. (Image source: Adobe Stock.)

Laser scanning, which also uses controlled deflection of laser beams to capture or establish surface shapes, is often used to build 3D models of buildings, mechanical systems and other specific objects. It is typically ground-based. Similar laser-deflection techniques are also used in some 3D printers to build physical objects based on coordinate data.

Both LiDAR and laser scanning typically produce point-cloud images, which consist of numerous 3D points that can be used to depict objects in computer-aided design (CAD) and building information modeling (BIM) systems. Point clouds often need manipulation to be converted to surface models or aligned with other 3D models or point clouds. Because of the large quantities of data generated by point clouds, the resulting datasets may also need to be “thinned” or downsized for practicality in CAD or BIM models. Software utilities and artificial intelligence (AI) can help with this process.
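
One common thinning approach is voxel-grid downsampling: divide space into cubic cells and keep a single representative point per cell. A minimal sketch, not taken from any particular CAD or BIM tool:

```python
# Thin a dense point cloud by keeping one centroid per cubic voxel.
from collections import defaultdict

def voxel_thin(points, voxel_size):
    """points: iterable of (x, y, z) tuples; returns one centroid per voxel."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    # Average the points that landed in each voxel.
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in buckets.values()]

# Four nearby points collapse to two representatives at 1 m resolution.
cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1), (5.0, 5.0, 5.0), (5.4, 5.2, 5.1)]
thinned = voxel_thin(cloud, voxel_size=1.0)
print(len(thinned))  # 2
```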

Photogrammetry

Photogrammetry has been used for mapping purposes since the early 1900s. While multiple types of photogrammetry have been employed, the most common AEC applications have used aerial photography and stereoplotters to analyze two or more photographic images taken from different positions. Using this information, photogrammetrists can determine 3D coordinates of select points and plot contour lines to create topographic maps.

Aerial photogrammetry uses two or more photographic images taken from different positions to determine coordinates of select points and develop topographic maps. (Image source: Adobe Stock.)

With the development of LiDAR and other technologies, photogrammetry has also been used in conjunction with these technologies to produce a wide variety of maps and datasets. For example, since photogrammetry is generally considered more accurate in the X and Y directions (horizontal coordinates), while LiDAR is generally more accurate in the Z direction (vertical), the two technologies can be combined. By georeferencing aerial photographs and LiDAR data in the same coordinate system, 3D visualizations can be created with optimal accuracy and contain a wealth of data.
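
As a toy illustration of that fusion, once both datasets are georeferenced in the same coordinate system, the horizontal coordinates can come from the photogrammetric solution and the vertical from LiDAR. The point IDs and coordinates below are invented for the example, not a real dataset.

```python
# Fuse horizontal (X, Y) from photogrammetry with vertical (Z) from LiDAR,
# matched by a shared control-point ID. Values are illustrative only.
photo_xy = {"CP1": (1000.12, 2000.34), "CP2": (1010.55, 2005.77)}
lidar_z = {"CP1": 152.31, "CP2": 153.02}

fused = {
    pid: (x, y, lidar_z[pid])          # best-of-both 3D coordinate
    for pid, (x, y) in photo_xy.items()
    if pid in lidar_z                   # only points observed by both methods
}
print(fused["CP1"])  # (1000.12, 2000.34, 152.31)
```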

GNSS

A GNSS uses satellite data to provide positioning, navigation and timing (PNT) services on a global or regional basis. The U.S.-operated global positioning system (GPS) is one of many GNSSs in the world.

GNSS uses satellite data to provide PNT services on a global or regional basis. (Image source: Adobe Stock.)

The U.S. Department of Defense initiated the U.S. GPS program in the 1970s. The full constellation of 24 satellites became operational in 1993. Initially, the accuracy of civilian GPS data was limited by a deliberate error introduced into the GPS data so that only military receivers could access the maximum accuracy. This limitation was removed in 2000. In the AEC world, most GNSS-based devices still combine satellite information with terrestrial-based corrections or augmentations to compensate for various imperfections and improve accuracy. 
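
A heavily simplified sketch of how a terrestrial correction works: a base station at a precisely known location observes roughly the same satellite errors as a nearby rover, so the base's apparent position error can be subtracted from the rover's raw fix. Real differential GNSS operates on satellite observables rather than plain coordinates, and the numbers below are invented.

```python
# Differential correction in miniature, using planar (easting, northing)
# coordinates in meters. All values are illustrative.
base_true = (5000.00, 8000.00)      # surveyed base-station position
base_observed = (5000.85, 7999.40)  # what the base receiver reported

# The error assumed common to both receivers at this moment:
err = (base_observed[0] - base_true[0], base_observed[1] - base_true[1])

rover_observed = (5120.65, 8045.10)
rover_corrected = (rover_observed[0] - err[0], rover_observed[1] - err[1])
print(round(rover_corrected[0], 2), round(rover_corrected[1], 2))  # 5119.8 8045.7
```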

With a robust network of satellites, GNSS data can be captured by numerous devices, including smartphones, tablets and other consumer products. For professional AEC use, more sophisticated GNSS receivers are used to capture 3D point data more accurately. These devices can be manually positioned or mounted on vehicles for mobile use.

GNSS receivers can be manually positioned to collect data at specific points. (Image source: Adobe Stock.)

In addition to providing a convenient way to collect survey data, GNSS serves many other AEC applications, such as automated construction layout, real-time guidance of construction equipment (machine control), tracking construction equipment and materials, monitoring worker safety and performance, and capturing project progress.

Selecting a method

With numerous data collection choices available, selecting the best method for any given project might seem like a daunting process. In addition to the individual methods described previously, sometimes multiple methods can be used together to achieve the desired results. And for small projects, sometimes conventional tools might still provide the most practical solution.

While there are no hard and fast rules for selecting the best method or methods, proper consideration of key factors and input from experienced professionals can simplify the process. Key factors to consider include:

  • Level of accuracy — If the data will be used for final design and modeling purposes, greater accuracy will be required than if the data will only be used for planning purposes.
  • Area of coverage — For large areas with high point-density requirements, airborne LiDAR might provide the best results. For smaller footprints with intricate facilities, ground-based laser scanning might be a better choice. Photogrammetry and GNSS can also be considered for projects of various sizes, either individually or in conjunction with one of the other methods.
  • Availability of existing data — If a project owner already has data of the appropriate level of accuracy and in the vicinity of the project, but just needs additional coverage, sometimes sticking with the previous data collection method makes the most sense.
  • Availability of services — Whether or not the project team has ready access to the various methods can play a part in the decision.
  • Budget — Like it or not, sometimes cost plays a key role in selecting a method.

The decision-making team may need to be a multi-discipline group, considering planning, design, construction, and operational needs. An experienced geospatial professional should also be part of the decision-making process.

Bentley’s Year in Infrastructure 2024: The AI paradigm shift
https://www.engineering.com/bentleys-year-in-infrastructure-2024-the-ai-paradigm-shift/
Mon, 16 Dec 2024 16:42:01 +0000
The annual event highlighted AI’s potential to meet existing and future infrastructure needs around the world.

The infrastructure sector is seeing increasing demand and data—but a critical shortage of skilled engineers to meet that demand, and a lack of insights derived from that data. Fortunately, a transformative new tool exists that could bridge those gaps: artificial intelligence (AI).

That was the key takeaway from Bentley Systems’ Year in Infrastructure 2024, the annual conference from the software developer focused on infrastructure design, construction and operations.

“AI is a new paradigm shift, transforming every industry, and infrastructure is no exception,” said Nicholas Cumins, CEO of Bentley Systems, in his keynote address at Year in Infrastructure, which took place in October in Vancouver, Canada.

If you missed the conference, here’s a recap of Bentley’s views on AI and other infrastructure engineering trends.

How AI can impact infrastructure

“Just imagine the sheer scale of data that is created in the design, the construction and the operations phase,” Cumins said. “It makes infrastructure a prime area where AI can have the greatest impact.”

AI-driven insights can enable infrastructure asset operators to predict when maintenance is needed before failures occur. AI can analyze digital twins of infrastructure assets such as bridges, energy transmission networks, roads and dams. It can identify issues and recommend preventative action, helping avoid breakdowns and safety incidents and even reduce an asset’s carbon footprint.

But there’s a caveat, according to Cumins. “The reality is, that in order to take advantage of AI and all the innovations, you need to get control of your data,” he said. Bentley believes it has the offerings that will enable infrastructure stakeholders to make the most out of this emerging technology.

Advancements in digital twin technology

Bentley anticipates that digital twins will be crucial for AI to enable smarter and more connected infrastructure. To that end, the developer has enhanced its iTwin platform with new features to integrate real-time data and improve the link between design models and operational data. They believe this will enable infrastructure professionals to better predict performance, optimize maintenance schedules, and make their asset management strategies more robust.

At Year in Infrastructure 2024, Bentley demonstrated how AI could be integrated into a digital twin through its new product, OpenSite+. A Bentley executive used the software’s generative AI copilot to design a hotel, validate the design, check the geospatial context of where the hotel was to be located, and make real-time changes to the design—all by simply talking to it. OpenSite+ also uses AI to automate the drawing process for a project.

Bentley’s OpenSite+ uses an AI copilot to help designers create detailed infrastructure designs in real time. (Image: Bentley Systems.)

Bentley also says it has enhanced its MicroStation 2024 software to help designers create digital twins as a natural part of their design work, through features including Python scripting support, integrating GIS data into the design, and enabling real-time collaboration on digital twins.

Bentley has also been working on incorporating 3D geospatial data into its digital twin platform. The company recently acquired Cesium, a 3D geospatial platform company whose 3D Tiles standard has been adopted as the Open Geospatial Consortium community standard. The combination of iTwin and Cesium technologies enables an infrastructure asset owner to, for example, collect drone photos, build a reality model from them in iTwin, run the model through AI analytics to detect cracks, process the analytic data through a 3D tiling pipeline into 3D Tiles format, and disseminate the files into any of Cesium’s runtimes.

Bentley is also partnering with Google, which has adopted the 3D Tiles standard in Google Maps, to incorporate photorealistic 3D tiles and geospatial data from 2,500 cities in 49 countries into a Cesium-powered data ecosystem compatible with real-time 3D engines such as CesiumJS, Unreal, Unity and Nvidia Omniverse.

With these partnerships, Bentley aims to facilitate the use of AI in digital twin environments to create better designs, adjust them in real time and, by using geospatial data, ensure the right decisions can be made to optimize an infrastructure asset at any point in its lifecycle.

The case for open infrastructure data

Another highlight of Year in Infrastructure 2024 was a call-to-action to make data more open and accessible, which Bentley sees as crucial to unlocking the full potential of AI.

“The infrastructure world is complex and, frankly, it’s often disconnected,” said Mike Campbell, chief product officer at Bentley, during his presentation at the conference. “And to make our existing systems more resilient and adaptable to population growth and climate change, we need to connect people with data.”

Mike Campbell, chief product officer at Bentley, presented Bentley’s vision for open data at Year in Infrastructure 2024. (Image: Bentley Systems.)

Complex infrastructure projects often involve multiple organizations, multiple teams, multiple engineering disciplines and multiple stakeholders working together for a long time. This complexity makes it impossible to rely on any single system or vendor.

Instead, infrastructure projects need an ecosystem where data is flexible, interoperable and easy to integrate across different tools and platforms. Bentley says its open applications are designed with this in mind, enabling users to edit models from other vendors and other software products while enabling collaboration across teams.

“A road, bridge or dam could be in operation for 50 years or more, undergoing repairs, upgrades and expansions,” said Campbell. “During this time the software and platform used to manage the asset will evolve.”

By ensuring that the data is open, asset owners and operators are able to adopt new technologies and innovations while still being able to rely on their own historical data. Bentley encourages the industry to adopt its open-source data schema for infrastructure so the sector does not have to keep starting from scratch with its data.

No single vendor can tackle the task alone, which is why open, flexible data systems are a better alternative over the long run. “The future of infrastructure engineering is open, it’s flexible, collaborative and built on a foundation of data that you can share securely,” said Cumins.

Sustainability in infrastructure

Another central theme at Year in Infrastructure 2024 was sustainability. There is increasing demand on existing and future infrastructure to be able to handle population growth while being resilient to—and perhaps even mitigating—climate change. Infrastructure designers, builders and operators also have to account for increased regulatory pressures such as the carbon accounting measures introduced in the U.S. and Europe, which can add time and cost to their projects.

However, the wide variety of methods and tools to calculate embodied carbon means that data is often not transparent, presenting a challenge to designers. Another challenge is the time required to calculate embodied carbon, which can be lengthy when the data needs to go through rigorous verification and data cleansing before it can be used. Because of these factors, accurate carbon data is often not available until late in the design phase—resulting in lost opportunities to reduce carbon in the design.

Bentley believes that the digital twin is ideally suited to meet those challenges. The technology can incorporate data and calculate the trade-offs between the economic, environmental and social outcomes of a project, improving decision making at every stage.

Bentley unveiled a new Carbon Analysis tool for the iTwin platform that they say can rapidly compute embodied carbon to help engineers minimize carbon and understand the required trade-offs. Continuous calculations during the design phase enable users to generate accurate carbon reports much earlier in the life of the project, and any updates to the design model can instantly show the updated carbon footprint of the project.
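
The underlying arithmetic of an embodied-carbon calculation is simple to sketch: multiply each material quantity in the model by an emission factor and sum. The factors below are invented placeholders, not published coefficients, and tools like Bentley's operate on full design models rather than hand-typed quantities.

```python
# Embodied-carbon accounting in miniature: kg CO2e per unit of material.
# Emission factors here are illustrative placeholders only.
FACTORS = {"concrete_m3": 300.0, "steel_kg": 2.0, "timber_m3": 100.0}

def embodied_carbon(quantities: dict) -> float:
    """Sum kg CO2e over a bill of quantities keyed like FACTORS."""
    return sum(qty * FACTORS[mat] for mat, qty in quantities.items())

design_a = {"concrete_m3": 500, "steel_kg": 20_000}
design_b = {"concrete_m3": 350, "steel_kg": 18_000, "timber_m3": 120}

# Re-running the sum after each design change gives an instant comparison.
print(embodied_carbon(design_a), embodied_carbon(design_b))  # 190000.0 153000.0
```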

Visualization of an airport’s embodied carbon generated by the Carbon Analysis tool. (Image: Bentley Systems.)

The Carbon Analysis tool supports over 30 mainstream design file formats within iTwin and integrates with external lifecycle assessment tools, making it easier to report carbon data to project stakeholders and designers. In turn, it’s easier to explore alternative designs, materials or construction methods. By enabling small adjustments early on and throughout the design process, infrastructure projects can reduce their carbon footprint.

Year in Infrastructure 2024 set out Bentley’s vision of the infrastructure sector in the coming decades, with a particular focus on the AI-powered solutions the company believes will help designers, builders and operators meet the challenges and opportunities that are ahead.

“Let’s use AI, our generation’s paradigm shift, to improve outcomes for the built and natural environment,” said Cumins.

The comeback of Notre-Dame
https://www.engineering.com/the-comeback-of-notre-dame/
Mon, 09 Dec 2024 15:42:50 +0000
Engineering tools designed for modern buildings helped bring the storied cathedral back to life—and could help preserve other cultural landmarks.

On December 8, 2024, the Notre-Dame de Paris cathedral officially reopened to the public, a little over five years after a fire ravaged the iconic structure. When the church’s spire collapsed and the lead-lined wood roof melted away, the world, and especially the French people, responded. French President Emmanuel Macron vowed that Notre-Dame would be restored, and fast.

Over the five years since the fire, hundreds of millions of euros have been spent and around 250 companies and hundreds of experts have worked to bring the Paris icon back to life. Although many of those that lent their expertise to the restoration were experts in traditional craftsmanship—carpenters, roofers, art restorers and so on—engineers and digital technology served a critical role as well.

Autodesk’s BIM model of Notre-Dame. (Image: The Public Establishment “Rebâtir Notre-Dame de Paris” and Art Graphique & Patrimoine.)

One company that has lent their software, workforce and skills to the restoration is Autodesk. Back in 2021 Engineering.com spoke with Autodesk about how they planned to create a building information modeling (BIM) model of the cathedral with Autodesk Revit. That model ended up being used in more ways than anticipated, and even resulted in changes to Revit itself.

Creating the BIM model of Notre-Dame

Putting together a BIM model of a structure with the complexity of Notre-Dame was only possible because 3D scans of the cathedral had been made prior to the fire.

“It was crucial for rebuilding,” Nicolas Mangon, VP of AEC industry strategy at Autodesk, told Engineering.com. “They decided to rebuild as it was before. So if there was no scan, there were no drawings, there was nothing.”

The France-born Mangon led the restoration project for Autodesk. Working with contractors and a core team of around 15 Autodesk employees, the team focused on using their BIM technology to meet the ambitious deadlines that had been set out for restoration.

Cross section of the BIM model of Notre-Dame. (Image: The Public Establishment “Rebâtir Notre-Dame de Paris” and Art Graphique & Patrimoine.)

Developing a BIM model as complex as Notre-Dame’s was no small task, but the Autodesk team had a window of time in which to do it. When Mangon visited the cathedral in 2022, the level of lead was still 10 times higher than what humans can safely be exposed to (all workers and visitors had to wear extensive protective equipment). The lead-removal process gave Autodesk the time it needed to build the model.

“We hired a company that had 10 to 12 people full time just creating the model for over a year,” Mangon said.

When the restoration teams were ready for the next stage, the BIM model was ready too.

“We saved them a lot of time,” Mangon said. “And they could use BIM and the value of it and instantly they had ROI.”

Adapting modern software to the 1200s

Built primarily in the 1100s and 1200s, Notre-Dame differs greatly in design and construction from modern buildings created with the help of 3D design software. That also means older buildings pose a bit of a challenge for modern BIM tools.

Revit has built-in intelligence and rules that help with making walls straight and aligning elements like columns, beams and the floor. These are typical features that usually make the user’s life easier, but they didn’t quite work for Notre-Dame.

“In Notre-Dame, nothing was straight. It was impossible to do anything,” Mangon said. “So we had to add capabilities to remove some of the logic in Revit to be able to support these kinds of projects in the future. Now we think that this type of technology could be used for a broader scope than just buildings from the last 50 years.”

Cutaway views shown at different layers of the model of Notre-Dame de Paris. (Image: The Public Establishment “Rebâtir Notre-Dame de Paris” and Art Graphique & Patrimoine.)

How the Notre-Dame BIM model was used

Once the restoration team had the BIM model, it was time to put it to use. Here are four ways it was used in the restoration:

Scaffolding

Notre-Dame’s repairs required extensive use of scaffolding inside and outside of the building. The cathedral is also covered in complex geometry that can be difficult to match perfectly with scaffolding. The BIM model was an important resource for planning the temporary structure.

“They spent a lot of time using the model to design the scaffolding digitally. Every single bar, every location of the scaffolding was predefined months before it was installed,” Mangon said.

Planning construction

The crane at a construction site is one of the most critical pieces of equipment. The BIM model was used to ensure that the crane could be fully used on the job site and reach all deliveries, no matter if they arrived via boat on the Seine River or via truck on nearby roads.

“They used the model to know exactly on which day, which materials were arriving and where the truck needed to park,” Mangon said. “They simulated every minute of the construction process digitally.”

Planning with the BIM model extended to creating instructions for individual workers as well. Many of the processes used to originally craft Notre-Dame are no longer in use, requiring extensive planning with the tradespeople. The BIM model was a key tool at every phase of planning out their work.

Lighting

As a tourist hotspot, the original Notre-Dame had few chances to close and make changes. That meant that the historically dark building remained fairly dim. The restoration provided the opportunity to renovate the building, and specifically, its lighting.

To understand what kind of lights needed to be put inside and where they would be located, engineers again turned to the BIM model.

“It’s very easy in the BIM model to add a direct light or diffuse light, and put it on top, at the bottom, on the side, or wherever. You can really simulate the way it’s going to look. So that was a byproduct of also using BIM for the project,” Mangon said.

Restoration of surroundings

The restoration and reopening was also a chance to improve Notre-Dame’s surroundings. Autodesk’s team scanned the surroundings of the cathedral, creating a model that included the utilities and buildings in the area.

Full BIM model of Notre-Dame. (Image: The Public Establishment “Rebâtir Notre-Dame de Paris” and Art Graphique & Patrimoine.)

“[The city of Paris] used this digital twin for the architectural contest. So four different companies actually bid on the renovation of the surroundings, including a new museum, new parking, new areas, and they used the digital twin that we created,” Mangon said.

The future of Notre-Dame

Although Notre-Dame reopened this weekend, restoration work continues. The cathedral’s website reports that the restoration of the chevet and sacristy will happen in 2025 and installation of stained glass windows will occur in 2026.

The cathedral’s small stairs and tough-to-navigate spaces mean the BIM model will continue to be used for these projects and for future maintenance and restoration. To enable this, the massive digital asset will be given to the church in Autodesk Construction Cloud.

As the project nears its end, Mangon’s biggest takeaway from the restoration is that none of the work his team has done would have been possible without the initial scans of Notre-Dame.

“I think it’s important to scan historical landmarks,” Mangon said. “If a disaster happens… at least we have something to start with.”

He pointed to the importance of the movement to help preserve Ukraine’s cultural heritage by 3D scanning monuments. The war in Ukraine has destroyed numerous structures, but with these scans, they are not gone forever.

“In the future, technology could help bring to life some of these destroyed structures,” Mangon said.

Why AEC firms should embrace AI
https://www.engineering.com/why-aec-firms-should-embrace-ai/
Wed, 21 Aug 2024 20:21:28 +0000
AI is becoming vital for staying competitive in the AEC industry.


For engineering firms, artificial intelligence (AI)-driven tools and other intelligent technologies are more than just a novelty or a luxury; they’re a near-imperative to keep pace in today’s highly competitive business environment, according to a newly released benchmarking report for the architecture, engineering, and construction (AEC) industries.

Findings from the 2024 edition of the AEC Inspire Report from Unanet, the business software company for which I serve as executive vice president for AEC, underscore just how important it is for firms to integrate technologies like AI across their operations, from business development to project execution to strategic planning. “One thing is certain,” the report asserts: “tech-advanced [AEC] firms that can harness the full potential of emerging technologies are the ones best positioned to accelerate growth, overcome challenges, and navigate the unknown. Such companies are not only operating for today; they are prepared for tomorrow.”

Based on survey responses collected this past spring from more than 330 senior-level AEC executives, the report (available for free download here) provides a revealing look at the trends, best practices, strategic priorities, and other dynamics shaping these three industries. It gives engineering firms the means to measure themselves against their peers across the industry.

AEC findings

The results highlight a strong sense of optimism across the AEC industries and an increasingly clear business case for firms to embrace technologies like AI. For example:

  • Most AEC firms feel good about the current business environment. A large share — 86% — of respondents hold an optimistic business outlook, and 42% say they’re “very optimistic.”
  • A winning business climate. Most firms, 58%, report a proposal win rate of more than 50%, while a much larger share, 72%, project a win rate above 50% for the year ahead, another sign of growing optimism. Those most confident in their future are firms that leverage technology because they are more likely to have keen insights into all aspects of their company’s resources, projects, and pipelines. These firms are better positioned to weather challenges and economic unpredictability while having greater confidence in their ability to forecast their business and manage resources.
  • Despite a generally positive outlook, 39% of AEC firms are concerned about the economy. Operational efficiency and talent recruiting and retention are other issues that are particularly concerning.
  • M&A (merger and acquisition) is on the menu, especially on the buy side. Half of surveyed AEC firms say acquisitions are of interest to their company in the year ahead, while just 5% are interested sellers. Among engineering firms, 40% say they’re interested buyers.

On the technology front, “it may be tempting to stay the course, to tackle change in slow increments,” states the report, “but this approach will not serve for much longer.”

Close to half of AEC firms — 48% — qualify as “tech-advanced” because they meet at least three of the following criteria:

  • Data-driven, regularly using data for business management, decision-making, and performance assessment.
  • Cloud-dominant, with more than 50% of tools and applications based in the cloud.
  • Fully integrated, with complete integration of platforms and applications across all systems.
  • AI-mature, as active users of AI with comprehensive firm-wide policies and procedures in place to guide and govern AI usage.
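
The report's "at least three of four" classification rule can be expressed as a small sketch; the field names below are illustrative, not taken from the survey instrument.

```python
# Classify a firm as "tech-advanced" if it meets at least 3 of the 4 criteria.
CRITERIA = ("data_driven", "cloud_dominant", "fully_integrated", "ai_mature")

def is_tech_advanced(firm: dict) -> bool:
    """firm maps criterion names to booleans; missing keys count as False."""
    return sum(bool(firm.get(c)) for c in CRITERIA) >= 3

firm = {"data_driven": True, "cloud_dominant": True,
        "fully_integrated": False, "ai_mature": True}
print(is_tech_advanced(firm))  # True
```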

More than half of AEC firms are using AI to some extent, while another one-third are open to using it but are not currently doing so. Our report reveals a strong business case for firms to implement AI:

  • Close to one-third of firms — 31% — are using AI with policies and guidelines in place as guardrails. However, 26% use AI without formal oversight policies, unnecessarily inviting legal, security, and compliance risks.
  • Architecture firms are twice as resistant to implementing AI as construction and engineering firms.
  • AI-mature firms are much more prolific project proposal producers, averaging 263 per year compared to 144 for less AI-savvy firms. They also win more projects and expect higher future win rates than less AI-savvy firms.

To deliver these kinds of benefits, AI requires firms to establish a strong foundation that includes not only internal policies to guide AI usage but also robust employee training on AI and high-quality data, underpinned by clear data stewardship policies. The report states, “Organizational data governance is foundational to AI implementation, and AI implementation is a must in today’s data-driven reality.”

Findings Specific to Engineering Firms

Engineering firms show deep concern about the current state of their workforce. Compared to their counterparts in architecture and construction, engineering firms struggle more with recruiting and more frequently list recruiting as a top human resource challenge. Although they share the AEC industry’s overall sense of business optimism, the workforce issue is pressing enough for many to turn down work for want of labor. As the report notes, firms can attract and retain talent by offering employees access to cutting-edge technology in their day-to-day work and by partnering with local colleges and trade schools.

A lack of sophisticated forecasting practices exacerbates the talent shortfall. Engineering firms most frequently rely on Excel spreadsheets to forecast labor resources and are less likely to be able to predict their growth rate. Troublingly, one-third of engineering firms say they cannot project their growth for the coming year.

Engineering firms also appear deliberate in adopting AI and in establishing supporting AI policies. Fewer than one-quarter of those we surveyed said they’re using AI with policy guardrails in place. As for the areas in which they expect to realize the most benefit from using AI, data analysis and content generation top the list.

Just how important are AI and digital technologies generally to success? For engineering firms, the report concludes, “Technological transformation is essential to maintaining competitive footing and operational resilience in the face of a growing talent shortage.”

About the author

Akshay Mahajan is Executive Vice President, AEC, at Unanet, a company that creates business software solutions for architecture, engineering and construction firms, and government contractors. For more information, visit https://unanet.com/.

The post Why AEC firms should embrace AI appeared first on Engineering.com.

]]>
Why are we still using 2D CAD? https://www.engineering.com/why-are-designers-engineers-and-architects-still-using-2d-cad/ Thu, 01 Aug 2024 17:29:47 +0000 https://www.engineering.com/?p=52634 Drawings still rule despite 3D’s overwhelming advantages.

The post Why are we still using 2D CAD? appeared first on Engineering.com.

]]>

They stab it with their steely knives
But they just can’t kill the beast.
—Eagles, “Hotel California,” 1977

3D may have won the war against 2D, but pockets of resistance continue to fight on. It was supposed to have been a quick war. With all the advantages the forces of 3D had on their side, surely 2D would surrender quickly.

I started teaching 3D CAD in 1989 after realizing the enormous potential of 3D. I evaluated and implemented enterprise CAD programs from McDonnell Douglas (which have evolved into NX) and Applicon Bravo (now extinct) in my first full-time position as an engineer. Both programs ran on expensive workstations connected to DEC’s VAX minicomputers. I taught design, engineering fundamentals and CAD at a community college using AutoCAD, a program that was then, as now, known primarily for its 2D. I wrung as much 3D out of AutoCAD as possible, modeling in 3D wireframe and solids. AutoCAD had licensed ACIS and could do CSG (constructive solid geometry), and through combinations of primitive shapes, hundreds of blocky, useless parts were born. I was betting on 3D as the imminent future. I hedged my bet by showing how to make front, top and right views. With view creation being push-button easy, I had to devote only a few classes to 2D.

Drawings hang on the wall of my house being remodeled.

But 2D still hangs in there… literally. I have just hung D-size drawings on the wall for the contractors remodeling my house. They’re familiar with drawings. With CAD, not so much. With 3D, forget about it. A year ago when I started planning the project, I dreamt of seeing the remodeled space in 3D, fully rendered and in VR. This was a wake-up call.

Rumors of my death have been greatly exaggerated.
—Mark Twain, acting the part of 2D

A rush to cut

I have no doubt that the ease with which a pencil can be put to paper to produce drawings is responsible for 2D’s perceived speed advantage. A colleague of mine, a contemporary from the drafting-table era, dismisses CAD, and 3D CAD in particular, and continually goads me with: “If CAD is so great, how come I can sketch a part on a napkin and take it to the shop before you can start your CAD program?”

Any argument to the contrary falls on deaf ears, but I know 2D’s speed to be a false promise. Many a time have I rushed into the shop with such a sketch, only to cut parts and regret that I hadn’t drawn to scale, had cut to the wrong dimension, had left a detail undetailed or had run out of material or parts. I totally understand the hurry to get started, to start cutting.

A little history

2D champions may argue that it is a natural form of documentation, since we have been drawing in 2D for as long as we have been on Earth. They draw support from the cave painting of a pig hunt in the caves of the Indonesian island of Sulawesi, said to be over 50,000 years old. However, 3D predates cave paintings by eras. Archaeologists found a 3D likeness of a person, the Tan-Tan statue, that is older than cave paintings by hundreds of thousands of years; it is estimated to be 300,000 to 500,000 years old and broke the $100 million barrier for statues of any age in a 2014 auction.

The tail wags the dog

The way we describe an object, whether so it can be manufactured, conveyed to others or recorded for posterity, is influenced in no small part by the medium at hand. For as long as civilization has existed, the medium has been 2D: the walls of caves, stone tablets, papyrus, parchment, scrolls… Maps show the Earth as flat. So pervasive did 2D become over the millennia that it may have distorted our reality. How else to explain those who saw the Earth as flat long after it was proved otherwise? The world we show as flat becomes the flat world we live in. Ergo, our walls, buildings, roofs… all are flat.

3D is more natural, but it doesn’t matter

Bam! That’s the gavel coming down. 3D is the natural way of describing an object. But no judgment will convince everyone to put down their pencils or switch their CAD to 3D mode.

Millions of 2D practitioners make 2D projections of 3D objects. For them, 2D is not an abstraction of reality but the simplest, most elegant and most efficient way to describe an object. With its set of standard views, it is systematic and complete.

Making 2D views using construction lines in AutoCAD.

Orthographic projection may not come naturally to all, but what does not come naturally can be taught. Those who can’t get the picture are taught to create the conventional views (front, top and right views for a mechanical part or the plans and elevations of a building) using construction lines. It’s a science, not an art.

Once learned, often painstakingly, 2D drawing and drafting can become second nature, and like every lesson learned after much time and effort, it becomes worthy of retaining and repeating, not casually abandoned. Abandoning 2D would be tantamount to admitting time and effort were wasted learning an unnecessary skill.

Or is it Stockholm syndrome: we are in love with a master who was once cruel to us, but one we now depend on and who is our only hope of survival?

2D will, no doubt, continue while its practitioners still have breath in their bodies and perhaps a generation afterward. There’s just so much of it in the system. And so we are bound to ask for a while longer, “Why will 2D CAD not go away?”

The post Why are we still using 2D CAD? appeared first on Engineering.com.

]]>
A first look at TestFit Generative Design https://www.engineering.com/a-first-look-at-testfit-generative-design/ Mon, 01 Jul 2024 18:55:22 +0000 https://www.engineering.com/?p=52135 TestFit thinks of all possible apartment and parking layouts. Then architects pick the best one.

The post A first look at TestFit Generative Design appeared first on Engineering.com.

]]>

Dallas-based TestFit has been around since 2015 to help designers plot multi-tenant dwellings (apartment buildings) on parcels of land. Their software is meant to take greenfield sites and provide multiple (think thousands) layouts to help design and development firms maximize future rental revenue — all within specified constraints.

Site layout for apartment houses has traditionally been done by an experienced architect who listens to owners and devises one or two options that will ideally balance the revenue potential with aesthetic ideals. TestFit takes a more prosaic approach: it creates thousands of layouts based on the lot shape. And TestFit is not confined to apartments. It can also lay out:

  • Retail spaces
  • Modular housing
  • Industrial spaces
  • Data centers
  • Hotels
  • Office parks
  • Parking lots
  • Structure
  • Precast structures

Results can be sorted and filtered by:

  • Units
  • Average unit area
  • Parking ratio
  • Net rentable square feet
  • Site efficiency
  • Height of building
  • Floor area ratio (FAR)
  • Unit density (dwelling units per acre)
  • Site coverage

With TestFit, the architect is supplied with what appears to be every possible layout and spared the drudgery of drawing each one. They are spared from having to count spaces in the parking lot, which is a welcome relief, but also from varying things such as building orientation, courtyard size, shared spaces, elevator location, and so on. All possible layouts are automatically drawn and sorted. The architect must look through the various layouts and pick the best ones to show the client.
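The filter-and-rank workflow described above is easy to picture in code. The sketch below is a minimal, hypothetical illustration in Python; the layout data, field names and thresholds are invented for the example and have nothing to do with TestFit’s actual internals or file formats:

```python
# Sketch of filtering and ranking generated layouts by designer-chosen
# criteria. All data and field names here are hypothetical examples,
# not TestFit's API.

def far(layout):
    """Floor area ratio: total built floor area divided by site area."""
    return layout["floor_area"] / layout["site_area"]

layouts = [
    {"name": "A", "units": 120, "floor_area": 96_000, "site_area": 60_000, "stalls": 150},
    {"name": "B", "units": 140, "floor_area": 110_000, "site_area": 60_000, "stalls": 140},
    {"name": "C", "units": 100, "floor_area": 80_000, "site_area": 60_000, "stalls": 170},
]

# Filter: keep layouts meeting a minimum parking ratio (stalls per unit)...
candidates = [l for l in layouts if l["stalls"] / l["units"] >= 1.1]

# ...then rank the survivors by unit count, a common proxy for rental income.
candidates.sort(key=lambda l: l["units"], reverse=True)

print([l["name"] for l in candidates])  # ['A', 'C'] -- B fails the parking test
print(round(far(candidates[0]), 2))     # 1.6
```

Swapping the sort key for `far` or site coverage would reproduce the other rankings in the list above.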

TestFit announced this somewhat automated process on June 26 with TestFit Generative Design, which will be available in July. It’s an aid that architects will most surely welcome. Had they been wondering how AI could benefit them — or, conversely, do away with their jobs — TestFit answers. This isn’t Hey, AI, Finish this Building, which would make them superfluous, but a force multiplier that makes them indispensable. With Generative Design, TestFit has taken multi-dwelling space planning to the next level.

With the industry anxiously awaiting computers finally putting the “aided” into computer-aided design, getting a TestFit demonstration was a must.

However, a caveat for those who expect TestFit to come with AI: to call it AI or generative design (either TestFit’s product or generative product design) is a stretch. The present definition of AI involves machine learning, and since TestFit’s generative design doesn’t use machine learning, it technically doesn’t qualify as AI. But if users allow a broader definition that includes creative design, TestFit certainly contains the spirit of AI.

Our demo showed a breathtaking number of layouts (about 3,000) generated in about three seconds. Was there a supercomputer behind the curtain? Nope. Just an ordinary Dell laptop, we’re told. Thousands of layouts are too many to browse, and a flip book of plans is useless. However, TestFit ranks the layouts according to designer-chosen criteria, such as the number of apartments, if users want the most common way of maximizing rental income. Of course, if the landowner thinks an enormous courtyard with bigger though fewer apartments would be ideal, the parameters can be adjusted to suit. What TestFit won’t do is make aesthetic judgments — and therein lies the added value of the architect, now free of the struggle of making layouts.

Never count parking spaces again. TestFit lets designers configure parking spaces on the fly and automatically counts the parking spaces. Image: TestFit.

Generative Design layouts are in TestFit’s proprietary format, but the software can also export to Revit and SketchUp.

Never has there been a generative design program that seems so easy to use. Generative design programs intended for product design subject users to an interface with unfamiliar terminology borrowed from optimization and simulation, specifically finite element analysis. TestFit takes care to use terms familiar to architects and designers. The company is to be commended on the UI, which is simple but not overly so. The easy-to-understand parameters are arranged neatly on one side. Results are listed graphically as a horizontal bar graph, with each bar representing a layout and the length of each bar corresponding to how well that layout meets the selected criterion (such as floor area ratio). Most of the screen is reserved for the layout itself.

So unlike other generative design programs is TestFit Generative Design that the company ought to consider another name. Why use a product name that is literally associated with sub-optimum optimization (most generative design “solutions” are ridiculous and/or impossible to produce) and a technology that has floundered in the marketplace?

The price of admission has not yet been revealed, but you can expect it to be competitive with Autodesk’s Forma, perhaps the best-known space optimizer, which goes for $1,500 a year or $185 a month.

You can find out more about TestFit Generative Design and get on a waiting list here: testfit.io/generative-design.

The post A first look at TestFit Generative Design appeared first on Engineering.com.

]]>
Bluebeam unveils Auto Align for overlay and comparison of drawings https://www.engineering.com/bluebeam-unveils-auto-align-for-overlay-and-comparison-of-drawings/ Thu, 06 Jun 2024 05:43:00 +0000 https://www.engineering.com/bluebeam-unveils-auto-align-for-overlay-and-comparison-of-drawings/ Revu uses AI to eliminate the need for manual comparisons and overlays.

The post Bluebeam unveils Auto Align for overlay and comparison of drawings appeared first on Engineering.com.

]]>
(Image courtesy of Bluebeam.)

In the architecture, engineering and construction (AEC) industry, accuracy and efficiency have often been hard to balance with productivity, especially when it comes to CAD drawings. Thanks to its focus on developing artificial intelligence (AI) solutions, Bluebeam’s recent release of Revu 21 comes with AI-based Auto Align, making accuracy, efficiency and productivity more easily attainable.

Comparing drawings remains a vital part of any construction project. Changes happen frequently, creating a host of tasks, such as organizing the drawings, ensuring correct versions and tracking modifications. Thanks to digital innovators, manually poring over drawings with rulers and highlighters is a thing of the past.

Prominent Bluebeam Revu features include Compare Documents and Overlay Pages. Compare Documents streamlines the process of highlighting the differences between documents through content analysis, which then provides a report of any discrepancies. The use of a color-coded system minimizes the manual time spent identifying changes. Overlay Pages provides a way to superimpose PDF versions of a document, which shows the differences via visual representation.

Although these tools propelled revision processes forward, they still required manually aligning three points for every revision, a tedious process that opened the door to errors and took several minutes to complete. Auto Align, which uses AI, changes that. According to Bluebeam, Auto Align allows users to determine differences in pages and documents in approximately 15 seconds—80 percent faster than previous methods. Considering the company noted Compare Documents and Overlay Pages are used more than 5 million times annually, this new development has the potential to vastly speed up projects while ensuring accuracy. Users simply click on the Auto Align tool, and AI completes the rest of the task.

Revu 21 gets another AI boost from a search feature added to Preferences, which minimizes the impact on workflows by letting users locate options and features easily. Another feature that eliminates potential errors from manual input, along with wasted time, is Automatic Title Block Recognition. Once a drawing is imported into Bluebeam Cloud, it automatically extracts and saves key information that may be needed throughout a project’s lifecycle.

Along with incorporating AI into Revu, Bluebeam created a collaborative workspace in Bluebeam Cloud, called Bluebeam Labs. Users in the AEC industry can sign up to work closely with Bluebeam to further innovate with AI, such as the first project that involved positioning 2D drawings in a 3D environment.

Although AI has the spotlight, Revu 21 also has some non-AI updates. Bluebeam Anywhere allows users to access Revu on up to five devices with one ID, making it easy to make markups on the go. A new Multiply feature enables quicker scaling of essential elements, while Markups List enhances communication by displaying custom measurement captions. Users can find more information on the latest fixes in Revu 21 here.

The post Bluebeam unveils Auto Align for overlay and comparison of drawings appeared first on Engineering.com.

]]>
What is photogrammetry? https://www.engineering.com/what-is-photogrammetry/ Thu, 30 May 2024 19:42:00 +0000 https://www.engineering.com/what-is-photogrammetry/ Photogrammetry is a potential game-changer, but its images can be of low quality.

The post What is photogrammetry? appeared first on Engineering.com.

]]>

Image: Polyscan

Photogrammetry is the method of using photographs to make 2D and 3D computer models.

Photogrammetry’s two biggest advantages are ease of use and easy access. Everybody has a digital camera and a 3D computer model can be created simply by pushing a button to process the photos (usually on the cloud) into 3D models.

How does photogrammetry work?

You take a lot of photos of an object, be it a part, product, building, structure, project site, etc., from different angles, going all around it, and input them all into a photogrammetry application which will stitch them all together to make what is essentially a 3D image.

Like the 2D images you took with your camera, the photogrammetry image looks like the object but is not to scale. You need to scale the 3D image up or down so it is dimensionally accurate. For example, if a room is measured at 25 feet, but the photogrammetry image has it at 25 inches, you will need to scale it up 12 times.

From then on, you can use the photogrammetry model for measurement.
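That scaling step amounts to multiplying every model coordinate by a single factor derived from one known dimension. A minimal sketch in Python, with hypothetical vertex data:

```python
# Scaling a photogrammetry model to real-world units.
# One reference dimension measured on site (e.g. a wall) fixes the scale.
# The vertex data below is hypothetical example data.

measured_real = 25.0 * 12   # the wall is 25 ft = 300 inches on site
measured_model = 25.0       # the same wall spans 25 model "inches"

scale = measured_real / measured_model  # 12.0 -- scale the model up 12x

# Apply the uniform scale to every vertex of the mesh.
vertices = [(0.0, 0.0, 0.0), (25.0, 0.0, 0.0), (25.0, 10.0, 8.0)]
scaled = [(x * scale, y * scale, z * scale) for (x, y, z) in vertices]

print(scale)      # 12.0
print(scaled[1])  # (300.0, 0.0, 0.0)
```

After this one multiplication, distances taken off the model are in real-world units.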

What are alternatives to photogrammetry?

Photogrammetry is one type of “reality capture” software. Another reality capture method is LiDAR, which is more accurate. LiDAR generates a point cloud, which is not as user-friendly as a photogrammetry image. Each point can be at millimeter-level accuracy (depending on the LiDAR sensor), but LiDAR hardware is usually more expensive and LiDAR systems are typically more difficult to use.

What are the applications of photogrammetry?

Photogrammetry can make 3D models for reverse engineering, orthomosaic and symbolic maps, GIS software layers or triangulated meshes of building sites.

If you have used Google’s Street View, you have used photogrammetry. Google uses specially designed vehicle-mounted cameras that take multiple photos as the vehicle moves down the street. Google’s servers stitch the pictures together to create a 3D image of the roads.

Bridge inspection can be done using cameras mounted on drones flying over and under bridges and with photogrammetry, bridge inspectors can inspect a 3D model of the bridge in the comfort and safety of their office.

In civil engineering, construction, mining, and waste management, photogrammetry models can monitor project progress by analyzing the volumes of cuts, fills, and piles.
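Volume monitoring of this kind typically reduces to differencing two elevation grids, existing surface against design surface, and summing per-cell volumes. A simplified sketch with hypothetical elevations (in meters); real workflows use much denser grids from the photogrammetry model:

```python
# Grid-method earthwork estimate from two surface models -- a common way
# to get cut/fill volumes out of photogrammetry-derived elevation grids.
# The elevation values and cell size here are hypothetical.

CELL_AREA = 4.0  # each grid cell is 2 m x 2 m

existing = [  # elevations captured by the survey
    [10.0, 10.2],
    [10.4, 10.6],
]
design = [    # target elevations from the design surface
    [10.0, 10.0],
    [10.0, 10.0],
]

cut = fill = 0.0
for row_e, row_d in zip(existing, design):
    for e, d in zip(row_e, row_d):
        diff = e - d  # positive: existing is higher, so material must be cut
        if diff > 0:
            cut += diff * CELL_AREA
        else:
            fill += -diff * CELL_AREA

print(round(cut, 3), round(fill, 3))  # 4.8 0.0
```

Comparing successive scans the same way gives pile volumes and progress over time.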

Photogrammetry can be used to produce nicely rendered 3D images of historical artifacts for museums. When online, these images can make a museum’s holdings accessible to the public and scholars worldwide.

The accuracy issue

Photogrammetry of smaller objects (say, at a one-foot or sub-meter scale) has been slow to catch on, perhaps due to accuracy issues. CMMs (coordinate measuring machines) rely on touch probes or lasers, not photos, for precise measurement.

How about the quality?

What a mess! Quick images created with photogrammetry have disappointed me.

The quality of photogrammetry’s results depends on several factors: the number of photos taken (25 to 200, the more the better, according to one vendor), the quality of the images, the software’s ability to detect edges and features and, finally, its ability to seamlessly stitch the photos together. Lured in by stunning 3D images (seamless, colorful scenes or carefully staged interiors created by photogrammetry vendors or professionals), beginners will be disappointed by their own results: usually a ragged, blotchy 3D image with significant gaps (where the camera did not see), shapes that appear in the image but were not there (such as the mysterious undulating black shape on the left of the picture above), objects not intended to be captured (such as those in the background of a product shot), objects missed altogether, or parts of objects the software is somehow unable to resolve. Interloping background objects can be cropped out using 3D volumes, but a ragged or missing surface could mean a re-shoot.

Photogrammetry is a technology with much promise, but after multiple attempts with many different consumer-grade apps, I have yet to create a photogrammetry image worth keeping.

The post What is photogrammetry? appeared first on Engineering.com.

]]>
SketchUp lets you scan a space with an iPad https://www.engineering.com/sketchup-lets-you-scan-a-space-with-an-ipad/ Fri, 24 May 2024 14:33:00 +0000 https://www.engineering.com/sketchup-lets-you-scan-a-space-with-an-ipad/ Trimble implements Canvas.io technology to create SketchUp models

The post SketchUp lets you scan a space with an iPad appeared first on Engineering.com.

]]>
New Scan-to-Design feature in SketchUp lets you scan a space and automatically creates a SketchUp model. Image: Trimble.

SketchUp is the first CAD program to directly use the LiDAR in consumer devices. Trimble, the company that owns SketchUp, announced that it will use the built-in LiDAR on Apple’s giant iPads (the iPad Pro) to create a SketchUp model from a scan. You may not have an iPad Pro, but after reading about how fast and easy it is to make 3D models of buildings and interiors, you may very well want one.

Suppose you are a general contractor, interior designer, floorer, real estate agent, broker or DIYer… let’s say anyone who needs a floor plan or a 3D model, anyone who has had to use a tape measure to lay out an as-built, or a homeowner who needs a floor plan for a permit and doesn’t want to hire an architect. All those who have had to spend hours taking measurements can now pretty much wave an iPad at the walls and voila! Up pops a 3D SketchUp model. Modeling of existing spaces and as-builts has never been easier.

Accuracy or speed? Pick one. 

Here is the LiDAR scanner on the latest model of the iPad Pro. Image: Apple.

Okay, it’s not quite as quick as implied. It takes a few minutes to paint the walls, as it were, with the invisible LiDAR beam of the iPad, and a few more minutes to process the LiDAR data (Trimble does that on the cloud). You have to have a few things in place, like a license for SketchUp. And, of course, you must have a recent model iPad Pro, at least a 5th generation. They’re the most expensive tablets you can buy. The most recent, the 13-inch 6th-generation iPad Pro, was recently introduced with a starting price of $1,299. Add extra RAM, the Pencil, the keyboard, insurance… and the cost can climb to over $3K.

Clearly, the iPad Pro can’t be a toy; it must be a tool. Then, think of it not as a very expensive tape measure but as a very reasonably priced scanner. Trimble’s professional-quality laser scanners, for which it is famous, can cost 10 to 20 times as much. Or think of it as a CAD operator, but one you buy and keep. Much to your delight, you will find that it can do 2D and 3D, unlike most CAD operators, for whom 2D is their only space.

We have looked at several apps that use Apple devices, including Polycam (our current favorite for its interface and all-you-can-eat pricing: $99.99/year), CamToPlan, Canvas (used by SketchUp) and Magicplan. All these applications are most impressive for their ability to recognize walls, ceilings, windows and doors using Apple’s built-in RoomPlan framework and its LiDAR hardware. Canvas.io and SketchUp up the ante, acting in concert to tag objects. Scan-to-Design, as this feature is called in SketchUp, is smart enough to recognize and tag furniture, such as tables, chairs and sofas; storage (closets); and appliances like refrigerators, stoves, washers and dryers, dishwashers, TVs, sinks, toilets and bathtubs. These are stored as blocks and can be moved, replaced and so on as single objects.

Accuracy or speed. Pick one. 

All the LiDAR apps for Apple devices work by pressing a record button, which immediately lets the app gather the walls and other features in an interior space. The best of them (Polycam) lets you see the 3D model as you scan, detecting edges and corners for walls and furniture and letting you visually verify your LiDAR coverage with an inset image of the 3D model. It’s magical and fun. Others, such as Canvas and SketchUp’s implementation of it, cover the scanned surfaces with a 3D mesh or attempt a fully rendered 3D model. The 3D mesh and the rendered 3D model will easily confirm areas that have been scanned so you don’t miss a spot. Still, the mesh and rendered model move around, quivering like Jello, seemingly constantly updating, which is a little distracting compared to the neater, more precise-looking, more stable 3D model Polycam uses.

Still, both are a welcome alternative to the tape-measure-and-sketch approach in widespread practice today. At the end of your scan, you may finally use a tape measure (or laser measure) to check one or two dimensions to confirm the miracle: you have captured all the details of a space without taking a single measurement until it was all over.

In all fairness, you will find your tape measure to be more accurate than the Apple LiDAR models, which can be off by 1 to 2%, according to Canvas.io. Over a 50-ft length, that can be a whole foot. But don’t stop reading. The accuracy advantage of the tape or laser measure is at least partially offset by building inaccuracy. Walls are rarely completely square to each other. A wall-to-wall measurement on one side of a room can differ from the same measurement on the other side. A single measurement, even with a laser, can give you a false sense of accuracy. A scan, however, will accurately capture the out-of-square in the as-built with millions of measurements. Most importantly, even with drift error from room to room, whatever you have lost in accuracy, you have gained in time. It can take hours to measure up each floor and make a CAD drawing, something you can accomplish in minutes by scanning.
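The arithmetic behind that trade-off is simple enough to check. A quick sketch using the 1 to 2% figure quoted by Canvas.io; the helper function and the lengths chosen are ours, for illustration only:

```python
# Back-of-the-envelope check of worst-case scanner error over a run.
# Canvas.io quotes 1-2% error for Apple LiDAR scans; the function and
# example lengths are illustrative, not from any vendor spec.

def worst_case_error_ft(length_ft, error_pct):
    """Worst-case absolute error over a run of the given length."""
    return length_ft * error_pct / 100.0

print(worst_case_error_ft(50, 2))  # 1.0 -- a full foot over a 50 ft run
print(worst_case_error_ft(12, 2))  # 0.24 ft, roughly 3 inches across a room
```

So the error matters most on long, uninterrupted runs; across a single room it stays within a few inches.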

For trades in which utmost accuracy is required, such as flooring, which may need cuts to the 16th of an inch, it would be wise to use a tape or laser measurement.

Scanning in SketchUp

The Scan-to-Design command is hidden under the Other Tools button in the latest release of SketchUp for the iPad. Tap it, press the record button and point the iPad at a wall to activate the Canvas technology. You will see the giant iPad screen fill up immediately with a 3D mesh that drapes over walls, doors, windows, furniture…

Here are the settings for SketchUp’s new scanning feature.

You can vary the resolution of the grid from 2 to 12 inches. You move the iPad up, down and sideways in a more-or-less sinusoidal path, watching the grid drape over everything the iPad sees. Don’t trip over the furniture. Move too fast and you get a warning. If you haven’t turned on all the lights, you might end up in a dark hallway, the scan will lose its place and you will have to start over. Had you read the instructions, you would have known to clear the path of obstructions and turn on all the lights. Oops. It’s okay to overlap as you move about; a little overlap is encouraged in these scanning apps, but too much can be confusing. In a really big space, the app may not register the end of the scan with the beginning due to drift error accumulating over the length of the scan, but you can correct that in SketchUp.

The Scan-to-Design feature, like the other scanning apps, does an excellent job of guessing where corners are. It is even able to determine the existence of corners hidden behind furniture or clutter by inferring that a corner at ceiling level has a matching corner on the cluttered floor. So smart! This is a big advantage over tape measurement — you don’t have to move the furniture to stick in a tape measure.

It’s a mosaic more than a 3D photo. You’ll have to hide it to see the 3D SketchUp model.

When you are done scanning, you tell Scan-to-Design so by pushing a button to upload the mesh to the cloud, where, after a minute or two for larger scans but without any effort on your part, all that has been captured is turned into a SketchUp model. It may not look like a SketchUp model at first. You will see a ragged and blotchy mosaic that, if you squint, looks like a 3D photo. This is common to most consumer-level scanners, which are unable to seamlessly stitch and iron out the micro images captured by the scanner. No matter. The offending 3D mosaic comes in on its own layer. Turn the visibility of the layer off, and voila! Your SketchUp model pops into view.

The SketchUp model appears when you turn off the layer for the mosaic.

You can work with the as-built room or building like any other SketchUp model — because it is one. Dan Scofield of Trimble, who gave us this demo, used his hotel room as an example, capturing it, then upgrading it by switching out the furniture (using models from SketchUp 3D Warehouse) and moving walls for a better layout. The app performed like a pro in Scofield’s video of the Scan-to-Design scan. It was less fluid in my hands — no doubt because I tried it on an Apple iPad over three years old. (I have since ordered the latest model and expect it to perform flawlessly, even faster than was demoed, since the latest model has Apple’s latest processor, the M4.)

Get yourself a room upgrade. Since SketchUp recognizes objects such as walls, beds and furniture from the scan, the objects can be easily moved and swapped out. Here is the same hotel room but with furniture rearranged and walls moved for better space utilization.
The Scan-to-Design feature in SketchUp may have finally succeeded in removing tape measurement and floor plan sketching from the workflow of architects, interior designers and contractors — and hopefully launched a whole generation into 3D and BIM.

The post SketchUp lets you scan a space with an iPad appeared first on Engineering.com.

]]>
AEC Industry Tech Trends: What to Invest in Now, Next and Later https://www.engineering.com/resources/aec-industry-tech-trends-what-to-invest-in-now-next-and-later/ https://www.engineering.com/resources/aec-industry-tech-trends-what-to-invest-in-now-next-and-later/#respond Wed, 15 May 2024 20:59:13 +0000 https://www.engineering.com/resources/aec-industry-tech-trends-what-to-invest-in-now-next-and-later/ A practical guide to understanding, evaluating and implementing AEC industry standards

The post AEC Industry Tech Trends: What to Invest in Now, Next and Later appeared first on Engineering.com.

]]>
In a post-COVID world, we have seen a lot of new technology trends arise in the architecture, engineering and construction (AEC) industry – as well as new challenges, which can be somewhat daunting for a firm to navigate. From artificial intelligence and generative design, big data, project management, and laser scanning and reality capture, which technologies should you invest in now and what should you be prepared for in the future?

This white paper will help provide guidance on how your firm can go about it practically, evaluating and successfully executing the adoption of the technology you need to progress and stay competitive in the market.

You will learn:

  • Top challenges facing the AEC industry
  • Technology trends in AEC, such as artificial intelligence (AI) and big data
  • Practical steps to take when implementing digital technologies

Your download is sponsored by ARKANCE.

The post AEC Industry Tech Trends: What to Invest in Now, Next and Later appeared first on Engineering.com.

]]>