AI use must ensure safety, not cause harm to human life

March 13, 2026 - 08:15
A woman uses an AI robot to assist with administrative procedures in Cửa Nam Ward, Hà Nội. —VNA/VNS Photo Nguyễn Thắng

HÀ NỘI — The development and use of artificial intelligence must ensure safety and reliability, and must not cause harm to human life, health, honour, dignity or mental well-being.

The requirement is set out in Circular No. 05/2026/TT-BKHCN, recently signed and promulgated by Minister of Science and Technology Nguyễn Mạnh Hùng.

The circular introduces the national artificial intelligence ethics framework, which aims to guide the research, development and application of artificial intelligence (AI) in a manner that is safe, responsible and aligned with the interests of individuals, communities and society.

The issuance of the framework helps operationalise the orientation outlined in Resolution No. 57-NQ/TW of the Politburo on making breakthroughs in science and technology development, innovation and national digital transformation, as well as the Law on Artificial Intelligence, which took effect on March 1, 2026.

Under the circular, organisations and individuals are required to design AI systems with safety built in from the outset, identify potential risk scenarios in advance and establish appropriate preventive measures.

They must also set quality criteria for data, models and outputs, and develop internal testing, verification and validation mechanisms before deploying AI systems.

The framework also requires that human oversight and intervention be ensured in all decisions and actions made by AI systems, in line with the level of impact such systems may have.

In addition, organisations and individuals must establish mechanisms to receive feedback, detect errors and address problems, as well as contingency plans in cases where systems malfunction or are misused.

System security measures must also be implemented to prevent, detect and stop unauthorised access, system takeover, data poisoning, model poisoning, adversarial attacks, vulnerability exploitation, data leakage or other forms of misuse.

The measures are intended to safeguard the confidentiality, integrity and availability of data, models, algorithms and related infrastructure.

The framework also stipulates that the development and use of AI must respect human rights and citizens’ rights, while ensuring fairness, transparency and non-discrimination.

Accordingly, organisations and individuals must adopt appropriate review measures to ensure that AI systems do not infringe upon privacy, personal data, freedom of choice, the right to access information, the right to equal treatment and other lawful rights as prescribed by law.

At the same time, developers and operators must identify and minimise potential biases in data, models and system operations, while fully considering the impacts on vulnerable groups such as children, the elderly, persons with disabilities and disadvantaged communities.

Organisations and individuals are also required to provide appropriate notification regarding the use of AI, including reasonable information on the system’s objectives, scope, data sources, general operational methods and limitations, in order to avoid misleading users about its capabilities.

The framework further encourages the development and application of AI in ways that promote social benefits, inclusive development and sustainable growth.

Organisations and individuals are urged to consider energy consumption, computing resources and environmental impacts throughout the entire lifecycle of AI systems, while prioritising technical solutions and operational processes that are energy-efficient and reduce emissions.

AI system design should also align with social ethical standards and Vietnamese cultural values, and must not produce discriminatory or prejudicial content or harm the interests of the community.

The framework will be reviewed and updated every three years, or earlier if significant changes occur in technology, legislation or governance practices. — VNS
