In a world where security cameras are everywhere but finding specific footage can feel like searching for a needle in a haystack, a new AI-powered solution is emerging. Conntour, a video surveillance startup, just secured $7 million in seed funding from prominent investors including General Catalyst and Y Combinator to build what it calls a “Google-like search engine for security video systems.” But this isn’t just another tech funding story – it’s unfolding against a backdrop of intensifying debates about surveillance ethics, AI regulation, and the practical challenges of making AI work efficiently at scale.
The AI Search Engine for Security Footage
Conntour’s platform uses vision-language models to let security personnel query camera feeds in everyday language. Instead of manually scrubbing through hours of footage, users can ask questions like “Find instances of someone in sneakers passing a bag in the lobby” and get relevant results in real time. The system can monitor up to 50 camera feeds on a single consumer GPU like Nvidia’s RTX 4090, making it surprisingly scalable for a startup solution.
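Conntour hasn’t published its architecture, but the standard pattern behind this kind of search is to embed the text query and each frame’s visual content in a shared vector space and rank frames by similarity. The sketch below illustrates the idea with a toy bag-of-words “embedding” standing in for a real vision-language model; the vocabulary, captions, and function names are all illustrative, not Conntour’s.

```python
from math import sqrt

# Toy stand-in for a vision-language embedding: a bag-of-words vector
# over a tiny fixed vocabulary. A real system would use learned joint
# text/image embeddings (e.g. CLIP-style models).
VOCAB = ["person", "sneakers", "bag", "lobby", "car", "door"]

def embed(text: str) -> list[float]:
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Pretend each frame already has a description produced by the vision model.
frames = {
    "cam1_t0": "person sneakers bag lobby",
    "cam2_t0": "car door",
    "cam1_t1": "person lobby",
}

def search(query: str, top_k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(frames, key=lambda f: cosine(q, embed(frames[f])),
                    reverse=True)
    return ranked[:top_k]

print(search("sneakers bag lobby"))  # ['cam1_t0']
```

Because ranking is just a nearest-neighbor lookup over precomputed frame embeddings, the expensive model runs once per frame rather than once per query, which is what makes interactive search over hours of footage feasible.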
CEO Matan Goldner told TechCrunch that the company is selective about its clients, working only with organizations whose use cases align with their ethical standards. “We’re really in control of who is using it, what is the use case, and we can select what we think is moral and, of course, legal,” Goldner said. This approach comes as the surveillance industry faces increased scrutiny, with recent controversies involving ICE tapping into camera networks and Ring enabling law enforcement requests for neighborhood footage.
The Efficiency Challenge: AI’s Scalability Problem
Goldner identifies a fundamental contradiction at the heart of AI surveillance systems: “We have two things that we want to do at the same time, and they contradict each other. On one hand, we want to provide full natural language flexibility, LLM-style, to let you ask anything. And on the other hand there’s efficiency, so we want to make it use very few resources.” This tension between capability and efficiency isn’t unique to Conntour – it’s a challenge facing AI applications across industries.
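One common way to resolve the tension Goldner describes is a cascade: a cheap first-stage filter decides which frames are worth sending to the expensive language-capable model at all. The sketch below is a minimal illustration of that pattern under assumed stand-in functions, not Conntour’s actual pipeline.

```python
def cheap_motion_score(frame: dict) -> float:
    # Stand-in for a lightweight detector (e.g. frame differencing);
    # costs almost nothing per frame.
    return frame["pixel_delta"]

def expensive_vlm_query(frame: dict, question: str) -> bool:
    # Stand-in for a vision-language model call; in practice this is
    # orders of magnitude more expensive than the filter above.
    return question in frame["labels"]

def process(frames: list[dict], question: str, threshold: float = 0.2):
    hits, vlm_calls = [], 0
    for f in frames:
        if cheap_motion_score(f) < threshold:
            continue  # skip near-static frames entirely
        vlm_calls += 1
        if expensive_vlm_query(f, question):
            hits.append(f["id"])
    return hits, vlm_calls

frames = [
    {"id": 0, "pixel_delta": 0.05, "labels": set()},
    {"id": 1, "pixel_delta": 0.90, "labels": {"bag"}},
    {"id": 2, "pixel_delta": 0.10, "labels": {"bag"}},  # missed: the filter's trade-off
    {"id": 3, "pixel_delta": 0.75, "labels": set()},
]
hits, calls = process(frames, "bag")
print(hits, calls)  # [1] 2
```

Only two of the four frames reach the expensive model, which is how a system can keep “ask anything” flexibility while watching dozens of feeds on one GPU; the cost is that an over-aggressive filter (frame 2 here) can drop a genuine match.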
Interestingly, this efficiency challenge is being addressed from multiple angles in the tech world. Radxa recently introduced the AICore DX-M1M, an AI accelerator module designed for maker projects that offers up to 25 TOPS (trillion operations per second) while consuming only about 3 watts of power. This kind of hardware innovation could make AI-powered surveillance systems more accessible and energy-efficient, potentially addressing concerns about the environmental impact of data-intensive AI applications.
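Radxa’s stated figures make the efficiency easy to quantify: 25 trillion operations per second at roughly 3 watts works out to about 8.3 TOPS per watt.

```python
# Efficiency of the AICore DX-M1M from its published figures.
tops = 25.0   # claimed peak throughput, trillions of operations per second
watts = 3.0   # approximate power draw
tops_per_watt = tops / watts
print(f"{tops_per_watt:.1f} TOPS/W")  # 8.3 TOPS/W
```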
The Regulatory Landscape: Growing Calls for AI Oversight
As AI surveillance technology advances, regulatory pressure is mounting. Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez recently proposed legislation to ban the construction of new data centers with peak power loads exceeding 20 megawatts until Congress enacts comprehensive AI regulation. The bill cites concerns about environmental impacts and AI-driven job displacement, reflecting growing public apprehension about unchecked AI development.
A Pew Research poll cited in the legislation shows that most Americans are more concerned than excited about AI, with just 10% saying their excitement outweighs their concern. This public sentiment creates a challenging environment for AI startups like Conntour, which must navigate both technical hurdles and increasing regulatory scrutiny.
Cybersecurity Context: The Need for Better Security Tools
The timing of Conntour’s funding round coincides with alarming trends in cybersecurity. According to a Mandiant report, cyberattacks are accelerating dramatically, with time-to-hand-off dropping from over 8 hours in 2022 to just 22 seconds in 2025. The mean time to exploit a vulnerability has fallen to just seven days, in some cases before patches are even available, leaving organizations a narrow window to protect themselves.
Mandiant researchers noted that “despite these rapid technological advancements, we do not consider 2025 to be the year where breaches were the direct result of AI. From our view on the frontlines, the vast majority of successful intrusions still stem from fundamental human and systemic failures.” This context makes AI-powered security tools potentially valuable, but also raises questions about whether they address the right problems.
The Bigger Picture: AI’s Role in Security and Society
Conntour’s story reflects broader trends in how AI is transforming security and surveillance. The company’s approach – combining natural language processing with video analysis – represents a shift from rule-based systems to more flexible, intelligent solutions. But as these technologies become more powerful, questions about their appropriate use become more urgent.
The surveillance industry stands at a crossroads, balancing the potential benefits of AI-powered security tools against concerns about privacy, ethics, and regulation. Conntour’s selective client approach represents one response to these challenges, but the broader industry will need to develop clearer standards and practices as AI surveillance capabilities continue to advance.
What’s clear is that the conversation about AI in security is moving beyond simple questions of capability to more complex considerations of efficiency, ethics, and regulation. As Goldner put it, the biggest challenge isn’t just building powerful AI systems – it’s building systems that are both powerful and practical in real-world applications.