Security researchers have raised serious concerns about AI-created apps after discovering thousands of exposed web applications. These apps were built with AI coding platforms but lacked proper security protections.
The study found more than 5,000 publicly accessible applications, many with weak or missing authentication systems, leaving sensitive data easily reachable online.
The research was carried out by Dor Zvi and his team at RedAccess, who examined applications built with AI development tools such as Lovable, Replit, Base44, and Netlify.
According to the findings, some AI-created apps allowed open access with no login at all. Others required only simple email verification, which is not enough to protect data: anyone who knows (or guesses) a valid email address can get in.
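The difference between the two patterns above can be sketched in a few lines. This is a hypothetical illustration, not code from the report; the handler names and session store are invented for the example.

```python
# Hypothetical sketch: "email verification" only vs. a real session check.
# All names (weak_handler, token_handler, SESSIONS) are illustrative.

SESSIONS = {"tok-123": "alice@example.com"}  # server-side session store


def weak_handler(request: dict) -> int:
    """Weak pattern: trusts any request that merely supplies an email."""
    if request.get("email"):
        return 200  # data returned to anyone who types an address
    return 401


def token_handler(request: dict) -> int:
    """Stronger pattern: the bearer token must match a server-side session."""
    if SESSIONS.get(request.get("token")):
        return 200
    return 401


# An attacker who only knows a victim's email passes the weak check...
assert weak_handler({"email": "victim@example.com"}) == 200
# ...but is rejected by the token-based check:
assert token_handler({"email": "victim@example.com"}) == 401
assert token_handler({"token": "tok-123"}) == 200
```

The point of the sketch is that an email address is public information, not a secret, so checking for its presence proves nothing about who is asking.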
Researchers said around 40% of the apps exposed sensitive information, including medical records, financial data, business documents, and internal corporate files.
In several cases, chatbot logs containing customer names and contact details were also exposed. Some applications even shipped administrative features without proper access controls, allowing ordinary users to delete administrators or take full control of the system. Experts say this creates a high security risk for businesses and users alike.
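The broken access-control pattern described above usually comes down to a missing role check before a privileged operation. A minimal sketch, with invented function and user names (not taken from the report):

```python
# Illustrative sketch of a missing authorization check on an admin action.
# User names, roles, and function names are hypothetical.

def make_users():
    return {
        "alice": {"role": "admin"},
        "bob": {"role": "user"},
    }


def delete_user_insecure(users: dict, caller: str, target: str) -> bool:
    # Bug: no authorization check -- any caller can delete anyone,
    # including the administrators.
    return users.pop(target, None) is not None


def delete_user_secure(users: dict, caller: str, target: str) -> bool:
    # Fix: verify the caller actually holds the admin role first.
    if users.get(caller, {}).get("role") != "admin":
        return False
    return users.pop(target, None) is not None


# In the buggy version, an ordinary user ("bob") deletes the admin:
u = make_users()
assert delete_user_insecure(u, "bob", "alice") is True

# The fixed version rejects the same request, but still lets admins act:
u = make_users()
assert delete_user_secure(u, "bob", "alice") is False
assert delete_user_secure(u, "alice", "bob") is True
```

The check has to happen on the server for every privileged endpoint; hiding the admin button in the UI, as some AI-generated apps apparently did, is not a substitute.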
The report also noted that many AI-created apps were publicly indexed by search engines such as Google and Bing, in part because AI platforms often host apps on shared domains.
Researchers also found phishing websites hosted on AI platform domains. These fake sites impersonated well-known brands, including Bank of America, FedEx, Costco, Trader Joe’s, and McDonald’s.
Security experts warn that the rise of AI-based development is changing how software is built: employees can now create and launch apps without traditional security reviews. That speed increases the risk of data leaks, and many organizations may not even know their information is publicly exposed.
The report highlights growing concerns around the safety of AI-created apps. Experts say stronger security rules are needed for AI development platforms.
They also recommend better monitoring and testing before deployment. Without these measures, sensitive data could continue to be exposed online through poorly secured applications.