
Artificial intelligence is a miraculous tool, or at least that's what the millions of people who use it daily to manage their lives, including their professional and work processes, might think. However, it's important to consider the many risks we face, especially when using it to write code.
Don’t believe it? Let’s look at what happened to an application built entirely from AI-generated code, as described by a security researcher in an article published on LinkedIn.
The researcher analyzed an app created with Lovable, an AI coding platform (a “vibe coding” tool that generates complete applications from prompts). That single application had 16 security vulnerabilities, six of them critical, and allowed access to data from more than 18,000 users without logging in. The exposed data included email addresses, student accounts, and records from universities and schools.
The flaw would have been easy to detect and fix if a human had been supervising: it was in the authentication code, where the access logic was reversed.
- If the user was authenticated → access blocked
- If the user was not authenticated → access allowed
This allowed anyone on the internet to access all the data.
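The article doesn’t publish the original code, but the reversed check can be sketched in a few lines of Python. The function names and the user structure here are hypothetical stand-ins, not Lovable’s actual code:

```python
# Minimal sketch of the inverted access check described above.
# `user` is a hypothetical dict-based session object, not real Lovable code.

def is_authenticated(user) -> bool:
    return user is not None and user.get("session_valid", False)

# Flawed logic: the condition is negated the wrong way around.
def can_access_flawed(user) -> bool:
    if is_authenticated(user):
        return False   # authenticated users are blocked
    return True        # anonymous visitors are let in

# Corrected logic: only authenticated users may access the data.
def can_access_fixed(user) -> bool:
    return is_authenticated(user)
```

A one-line mistake like this is trivial for a human reviewer (or a single unit test) to catch, yet it was enough to expose the entire dataset.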
Responsible AI Software Development
What we’ve just seen makes one thing clear: artificial intelligence is a useful tool, probably one of the most important technologies to reach the software and development industry. However, it must be used properly, and that’s where the concept of responsible use comes in.
Responsible AI software development is the approach of using artificial intelligence tools to generate or assist in writing code, but with human oversight, engineering practices, and security measures that ensure the software is reliable, secure, and ethical.
In other words: it’s not just about generating code with AI, but about doing it in a secure and verifiable way.

Key Principles of Responsible AI Development at Rootstack
Human Code Review
AI-generated code should always be reviewed by a developer. Artificial intelligence can make logical mistakes, introduce vulnerabilities, or apply poor practices.
For example, an AI could write an authentication function that allows access without properly validating the user.
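As an illustration of the kind of flaw a reviewer should look for, here is a hypothetical example (the function names and user store are invented for this sketch): an AI-drafted login function that checks the password but still returns success on the failure path.

```python
# Hypothetical AI-generated draft: it "checks" the password,
# but the failure branch also returns True, so every login succeeds.
def login_draft(username: str, password: str, users: dict) -> bool:
    user = users.get(username)
    if user and user["password"] == password:
        return True
    return True   # bug: invalid credentials are also accepted

# After human review: the failure path actually denies access.
def login_reviewed(username: str, password: str, users: dict) -> bool:
    user = users.get(username)
    return bool(user and user["password"] == password)
```

The draft compiles, runs, and even looks plausible at a glance, which is exactly why AI-generated code needs a human reviewer rather than a quick skim.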
Security by Design
Generated code must follow security best practices such as:
- Proper authentication
- Access control
- Input validation
- SQL injection protection
- Secure credential handling
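Two of these practices, input validation and SQL injection protection, can be sketched together. This is a minimal example using Python’s standard `sqlite3` module; the table schema and email rules are assumptions for illustration:

```python
import sqlite3

# Sketch: validate input early, then use a parameterized query so user
# input can never alter the SQL statement. Schema is hypothetical.
def find_user_by_email(conn: sqlite3.Connection, email: str):
    # Input validation: reject obviously malformed input before it
    # reaches the database.
    if "@" not in email or len(email) > 254:
        raise ValueError("invalid email address")
    # Parameterized query: the driver binds `email` as data, so input
    # like "' OR '1'='1" cannot change the query's logic.
    cur = conn.execute("SELECT id, email FROM users WHERE email = ?", (email,))
    return cur.fetchone()
```

The key design choice is the `?` placeholder: string-concatenating user input into SQL is one of the poor practices AI assistants still produce, and it is exactly what parameter binding prevents.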
Mandatory Testing
AI-generated code should never be used without automated testing. Recommended tests include:
- Unit tests
- Integration tests
- Security testing
- Fuzzing
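A unit test for the access check would have caught the reversed logic in the Lovable case before deployment. A minimal sketch with the standard `unittest` module (the `can_access` function here is a hypothetical example, not code from the analyzed app):

```python
import unittest

# Hypothetical function under test: an access check like the one
# that was reversed in the AI-generated app.
def can_access(user) -> bool:
    return user is not None and user.get("session_valid", False)

class AccessControlTest(unittest.TestCase):
    def test_anonymous_user_is_blocked(self):
        # This single assertion fails immediately if the logic is inverted.
        self.assertFalse(can_access(None))

    def test_authenticated_user_is_allowed(self):
        self.assertTrue(can_access({"session_valid": True}))
```

Run with `python -m unittest` as part of CI, so AI-generated changes cannot be merged without passing these checks.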
Transparency and Traceability
Documentation should clearly indicate which parts of the code were generated by AI, which model was used, and what changes were made by the developer. This helps audit errors or vulnerabilities and maintain clean, maintainable code.
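One lightweight way to record this is a provenance header at the top of each AI-assisted file. The exact fields below are an assumed convention, not a standard:

```python
# Provenance header (hypothetical convention for AI-assisted code):
#
#   Generated-by: <AI model and version>   (initial draft)
#   Reviewed-by:  <developer name>         (logic and security review)
#   Modified:     <summary of human changes to the generated draft>
#
# Keeping these fields in version control makes it possible to audit
# which model produced a vulnerable pattern and when it was fixed.
```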
Data Protection
Artificial intelligence must be supervised to prevent it from exposing API keys or accidentally leaking sensitive information. Likewise, confidential data should never be included in prompts used to generate code.
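In practice this means rejecting hardcoded secrets in generated code and reading them from the environment instead. A minimal sketch; the variable name `PAYMENT_API_KEY` is a hypothetical example:

```python
import os

# Sketch: load secrets from the environment rather than the source code.
def get_api_key() -> str:
    key = os.environ.get("PAYMENT_API_KEY")
    if not key:
        raise RuntimeError("PAYMENT_API_KEY is not set")
    return key

# Anti-pattern an AI assistant may produce and a reviewer must reject:
# API_KEY = "sk-live-abc123"   # hardcoded secret, ends up in version control
```

Failing fast with a clear error when the variable is missing is deliberate: a silently empty key tends to surface later as a confusing authentication failure.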
Continuous Monitoring
Software generated with AI should be monitored after deployment to detect bugs, security breaches, or any unexpected behavior.
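A small piece of this is making access decisions observable. A minimal sketch using Python’s standard `logging` module (the logger name and event format are assumptions):

```python
import logging

logger = logging.getLogger("app.monitoring")

# Sketch: log every authorization decision. A sudden spike in denials
# (probing) or in grants from anonymous sources (a broken access check,
# as in the Lovable case) then shows up in post-deployment dashboards.
def record_access_attempt(user_id: str, allowed: bool) -> None:
    if allowed:
        logger.info("access granted user=%s", user_id)
    else:
        logger.warning("access denied user=%s", user_id)
```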
Rootstack: Your Trusted Partner for AI Software Development