Researchers are advancing the representation of building semantics to enhance AI training in the architecture, engineering, construction, and operation (AECO) sector. A study by Suhyung Jang and colleagues proposes a method that uses large language model (LLM) embeddings, such as those from OpenAI GPT and Meta LLaMA, to capture nuanced distinctions among 42 building object subtypes. Using GraphSAGE to classify objects in high-rise residential building information models (BIMs), the LLM-based encoding outperformed traditional one-hot encoding, achieving a weighted average F1-score of 0.8766 versus 0.8475.

The findings suggest that LLM encodings help AI systems interpret complex building data more accurately, supporting informed decision-making throughout construction projects. As LLMs and data reduction techniques evolve, their application across AECO workflows could extend automation from design validation to operational efficiency, improving construction management and compliance checking. For in-depth insights, refer to the research on enhancing building semantics in AI.
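The core idea lends itself to a short sketch: encode a textual description of each BIM object with an LLM embedding model, then use the resulting vectors as node features for a GraphSAGE classifier. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation; the embedding model name (text-embedding-3-small), the toy object descriptions, the placeholder graph edges, and the hyperparameters are all assumptions for demonstration.

```python
# Minimal sketch (not the authors' code): LLM embeddings of BIM object
# descriptions as node features for a GraphSAGE subtype classifier.
import torch
import torch.nn.functional as F
from openai import OpenAI
from torch_geometric.data import Data
from torch_geometric.nn import SAGEConv

# 1. Encode textual descriptions of BIM objects with an LLM embedding model.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

object_texts = [  # hypothetical per-object descriptions
    "IfcWall: load-bearing reinforced concrete core wall, level 12",
    "IfcDoor: fire-rated steel door, stairwell access",
    "IfcSlab: post-tensioned floor slab, residential unit",
]
labels = torch.tensor([0, 1, 2])  # hypothetical subtype class indices

resp = client.embeddings.create(
    model="text-embedding-3-small",  # assumed model; the study used GPT/LLaMA embeddings
    input=object_texts,
)
x = torch.tensor([d.embedding for d in resp.data], dtype=torch.float)

# 2. Build a graph over the objects. The edges here are placeholders; the study
#    derives relationships from the BIM itself (e.g. spatial/topological links).
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
data = Data(x=x, edge_index=edge_index, y=labels)

# 3. Two-layer GraphSAGE classifier over the LLM-derived node features.
class SAGEClassifier(torch.nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)
        self.conv2 = SAGEConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

model = SAGEClassifier(x.size(1), 64, num_classes=42)  # 42 subtypes per the study
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

model.train()
for epoch in range(50):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out, data.y)
    loss.backward()
    optimizer.step()
```

In practice the one-hot baseline would simply replace x with a 42-dimensional indicator vector per object; the reported F1 gap (0.8766 vs. 0.8475) reflects swapping that encoding for the richer LLM embedding while keeping the GraphSAGE classifier fixed.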
