Interview with HKU Professor Xuguang Wang on AI Ethics in Civil Engineering
Discussed AI's implications for civil engineering, the inclusion of vulnerable groups in cities, questions of accountability, data privacy, and future needs
INTERVIEWS
YAFI Interview team
4/17/2026


Interview held on April 7th, 2026
Abstract:
This interview highlights a fundamental gap in AI development within civil engineering: while technological innovation is advancing rapidly, ethical considerations and social welfare remain underprioritized. Professor Xuguang Wang emphasizes that this is a structural issue: engineers, focused on efficiency and implementation, cannot be expected to ensure ethical governance on their own. Effective AI oversight therefore requires cross-disciplinary collaboration and government-led standards, similar to existing infrastructure safety codes.
In practice, AI is used to enhance efficiency rather than replace human labor, but its safe application depends on the careful calibration of trust. Without sufficient domain knowledge, users default to either over-reliance or complete rejection of AI. Therefore, responsible AI integration lies in balancing machine autonomy with informed human oversight, ensuring both efficiency and accountability.
Full Interview Brief:
In a recent interview with the Youth AI Future Institute, Xuguang Wang, an assistant professor at the University of Hong Kong, offered a grounded and technically candid perspective on the current state of artificial intelligence in civil engineering and highlighted both its transformative potential and its systemic ethical blind spots.
Professor Wang, whose research focuses on digital twins and surrogate modeling for simulating urban systems, began by reframing a common misconception: civil engineering is not primarily about building cities, but about maintaining and operating them. While construction is only an initial phase, the long-term challenge lies in sustaining infrastructure efficiency, safety, and functionality. In this context, AI is increasingly being integrated into operational systems, ranging from transportation optimization, such as routing algorithms used in ride-hailing platforms, to energy management systems that predict consumption and allocate resources more efficiently. Professor Wang also highlighted his own work using AI to model indoor airflow and temperature distribution, enabling more precise environmental control while reducing energy waste.
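To give a flavor of the surrogate-modeling idea mentioned above (a cheap approximation trained on a handful of expensive simulation runs), here is a minimal hypothetical sketch. The temperature function, sample points, and names are illustrative assumptions, not details of Professor Wang's actual models.

```python
# Hypothetical sketch of surrogate modeling: replace an "expensive"
# simulation with a cheap interpolator fitted to a few sampled runs.
import bisect

def expensive_simulation(vent_rate):
    """Stand-in for a costly airflow/temperature solver (illustrative)."""
    return 30.0 - 2.0 * vent_rate  # degrees C, purely made up

# Run the expensive model only at a few design points...
samples = [(x, expensive_simulation(x)) for x in (0.0, 1.0, 2.0, 3.0)]

def surrogate(vent_rate):
    """...then answer new queries with piecewise-linear interpolation."""
    xs = [x for x, _ in samples]
    i = min(max(bisect.bisect_right(xs, vent_rate), 1), len(xs) - 1)
    (x0, y0), (x1, y1) = samples[i - 1], samples[i]
    return y0 + (y1 - y0) * (vent_rate - x0) / (x1 - x0)

print(surrogate(1.5))  # 27.0: the surrogate answers without re-simulating
```

In real digital-twin work the solver would be a CFD or finite-element run and the surrogate a learned model, but the division of labor (few expensive runs, many cheap queries) is the same.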
Despite these advances, he emphasized that AI in civil engineering remains in a relatively early and cautious stage. Unlike fields with higher tolerance for error, civil engineering operates under strict safety constraints, where failure can have immediate and severe consequences. As a result, AI is largely confined to assistive roles. For example, while generative models may propose initial structural designs, engineers do not fully trust these outputs; instead, AI serves as a preliminary tool rather than a final authority. “We don’t fully trust AI at the moment,” he noted, underscoring the field’s reliance on human verification.
When the discussion turned to AI ethics, particularly inclusivity and the impact on vulnerable populations, Professor Wang's response was sobering. He acknowledged that civil engineering, like engineering fields more broadly, has not sufficiently addressed these concerns. Most research, he explained, focuses on improving efficiency, speed, or structural performance, with little attention given to how AI systems affect disadvantaged groups such as the elderly or individuals with disabilities. “Very few people think about this,” he admitted, adding that even in the academic literature, discussions of ethical distribution or social welfare remain rare and systematically overlooked.
That said, Professor Wang responded to YAFI’s Chief Creative Director Anthony Lim that some indirect efforts exist. Wang pointed to research using computer vision to monitor and reduce construction noise in densely populated areas, where elderly residents near the sites or roads are particularly sensitive to disruptions. By predicting noise levels and issuing warnings when thresholds are exceeded, such systems aim to mitigate health impacts for a vulnerable social group. Similarly, AI is being used to improve construction site safety by identifying hazards and optimizing work schedules based on weather conditions. However, Professor Wang characterized these efforts as very primitive, indicating that ethical considerations are still peripheral rather than central to engineering design.
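The noise-monitoring systems described above can be reduced to a simple pattern: compare predicted or measured levels against a limit and raise alerts on exceedance. The sketch below is a hypothetical illustration of that pattern; the 70 dB limit and all names are assumptions, not details from the cited research.

```python
# Minimal sketch of threshold-based construction-noise alerting.
# The limit and readings are illustrative, not regulatory values.

NOISE_LIMIT_DB = 70.0  # hypothetical daytime limit near residences

def check_noise(readings_db, limit=NOISE_LIMIT_DB):
    """Return indices of readings that exceed the limit."""
    return [i for i, level in enumerate(readings_db) if level > limit]

readings = [62.1, 68.4, 73.9, 71.2, 65.0]  # dB, e.g. hourly predictions
print(check_noise(readings))  # [2, 3]: two intervals warrant a warning
```

The research version would feed computer-vision and acoustic predictions into the check, but the warning logic itself stays this simple.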
At the core of this issue, he argued, lies a structural limitation: engineers are primarily focused on realizing and optimizing technology, not on governing its societal consequences. Ethical oversight, therefore, cannot be expected to emerge organically from within technical disciplines. YAFI’s president Ethan Ha added that the same structural tendency that pushes engineering toward the most efficient, scalable, and low-cost technologies also limits the additional ethics work, research, and investment in the safety and inclusiveness of the innovations it creates. Society, he asserted, is at a juncture where it struggles to hit two birds with one stone: technical innovation driven by engineering priorities, and serious attention to the safety and ethics of AI.
Instead, Professor Wang stressed the need for cross-disciplinary collaboration, supported and coordinated at the governmental level. Drawing an analogy to the development of the internet, he explained that while engineers build the foundational technologies of the web and its applications, programming the back end and front end, the broader societal implications, along with the content, norms, and risks that follow, are shaped by a far wider group of user-stakeholders who consume the internet. “The people who spot ethical issues and raise awareness of the dangers of the technology are not only the people who create it, but those who use it,” he said. “When you talk about safety and ethics, far more people need to be involved.”
This divide becomes particularly evident in the question of human accountability raised by Secretary Kayoung Choi. In civil engineering, responsibility remains firmly human-centered. Even when AI is used to accelerate processes, such as checking structural designs against safety codes, final approval cannot be delegated to machines. Professor Wang described how AI could reduce months of manual verification work to minutes, yet engineers must still review the results to ensure reliability. Without sufficient understanding of how AI systems function, he warned, users are left with only two extremes: complete trust or complete rejection. Neither, he suggested, is viable. Instead, responsible use of AI depends on informed oversight, where humans understand both the capabilities and the limitations of the tools they employ.
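The workflow described here, automated screening followed by human sign-off, can be sketched as below. The span-to-depth limit and member data are invented for illustration and do not come from any actual design code or from Professor Wang's work.

```python
# Hypothetical sketch: machine screening of structural members against
# a simple code-style limit; flagged members still need engineer review.

SPAN_DEPTH_LIMIT = 20.0  # illustrative span-to-depth ratio cap

def flag_members(members, limit=SPAN_DEPTH_LIMIT):
    """Return names of members whose span-to-depth ratio exceeds the limit."""
    return [m["name"] for m in members
            if m["span_m"] / m["depth_m"] > limit]

beams = [
    {"name": "B1", "span_m": 6.0, "depth_m": 0.40},  # ratio 15.0, passes
    {"name": "B2", "span_m": 9.0, "depth_m": 0.35},  # ratio ~25.7, flagged
]
print(flag_members(beams))  # ['B2']: the engineer reviews only this one
```

The time savings Professor Wang describes come from the machine narrowing months of exhaustive checking down to a short list, while approval of that list stays with a human.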
This leads directly to what Professor Wang identified as the central solution: education. As AI adoption accelerates, he has observed a growing divide between those who avoid the technology and those who over-rely on it. Younger users, in particular, tend to depend heavily on AI systems, often without critically evaluating their outputs. To address this, Professor Wang advocated early and carefully structured AI education. While young students may not need to understand the full technical details, they should at least recognize that AI is not a “magic” system but a human-created model with inherent limitations. More advanced understanding, including basic principles of how models generate predictions, should be introduced at later stages of education, likely during high school and college, to enable responsible use grounded in technical awareness.
Finally, YAFI’s Chief Operating Officer Doha Yeo addressed one of the most unresolved challenges in AI deployment: data privacy. Although civil engineering does not rely as heavily on personal data as fields like medicine, certain applications, such as drone-based building inspections, inevitably raise privacy concerns. Professor Wang described how drones used to detect structural damage can unintentionally capture images of residential interiors, leading to complaints from occupants. Currently, there are no clear global standards governing how such data should be collected, stored, or used. In practice, this uncertainty has led some researchers, including Professor Wang himself, to avoid projects involving sensitive data altogether. “There is no very clear guideline yet,” he said, highlighting a significant regulatory gap.
Reflecting on the broader implications of AI development, Professor Wang concluded with a striking admission: despite being deeply involved in advancing these technologies, even experts remain uncertain about how best to use them. “We create this technology, but we don’t know how to use it,” he said.
In a field where safety, efficiency, and human welfare intersect, this uncertainty is not just a technical challenge: it is a societal one.
