AI's Creative Promise Meets Real-World Limits: From Sketchnotes to Security Risks

Summary: Testing Google's Nano Banana 2 AI for creating sketchnotes reveals persistent challenges with consistency and accuracy, requiring multiple iterations for usable results. Meanwhile, AI tools like Gemini Pro help identify critical security vulnerabilities in industrial systems, demonstrating both promise and risks. Manufacturing experts warn that physical infrastructure often lags behind AI capabilities, creating implementation gaps. These cases collectively show that AI's real-world value depends heavily on human guidance, context-aware integration, and recognizing both creative potential and practical limitations.

Imagine asking an AI to create a visual summary of the U.S. Bill of Rights, only to receive a diagram where Roman and Arabic numerals battle for supremacy, numbers duplicate themselves, and articles appear in random order. This isn’t a hypothetical scenario – it’s exactly what happened when ZDNET’s David Gewirtz tested Google’s Nano Banana 2 AI image generator for creating sketchnotes. His experience reveals both the remarkable potential and frustrating limitations of today’s AI tools, raising critical questions about their real-world reliability.

The Sketchnote Experiment: Six Tries to a Usable Result

Gewirtz, a self-described “graphophile” who loves charts and diagrams, put Nano Banana 2 through rigorous testing. Starting with a simple prompt to create a sketchnote of the Bill of Rights, he encountered persistent issues: mixed numbering systems, duplicated elements, and incorrect ordering. It took six iterations with increasingly specific instructions to finally achieve a usable result. “The AI got the title highlighting right, but still couldn’t handle the order,” he noted, highlighting the gap between AI’s creative potential and its practical execution.

When testing with his own articles about AI coding techniques, the problems escalated. The AI generated nonsensical text like “ADIUK SALIRE BAT DIANCIORE” and showed gender stereotyping – representing “the maker” as male and using “seamstress” instead of the gender-neutral “sewist” from the original article. These issues point to deeper challenges in AI training data and model consistency that go beyond simple technical glitches.

Beyond Creative Tools: AI’s Security Implications

While creative applications capture headlines, AI’s role in cybersecurity presents more urgent concerns. In a recent case documented by German IT security firm Jakkaru, researchers used Google’s Gemini Pro AI to analyze decompiled C code from APsystems solar micro-inverters. The AI helped identify critical vulnerabilities that could have allowed attackers to install malicious firmware on approximately 100,000 devices. This discovery wasn’t theoretical – it revealed how AI-assisted analysis could enable attacks that might destabilize power grids through mass device shutdowns.

The APsystems case demonstrates AI’s dual nature in security contexts. While the technology helped researchers identify vulnerabilities faster, the same capabilities could be weaponized by malicious actors. As one researcher noted, “The AI helped make the connection process easily understandable,” but this understanding could cut both ways. The vulnerabilities have since been patched, but the incident underscores how AI tools are becoming essential in both offensive and defensive cybersecurity operations.

The Manufacturing Challenge: When AI Meets Physical Reality

Even as AI capabilities advance, integrating them into real-world operations presents significant hurdles. According to Asad Afzal, global director of transformation at A-Safe, “Most facilities weren’t built for the level of automation AI now supports.” His observation highlights a critical disconnect: while AI can optimize workflows digitally, it “cannot fix a layout that has existing friction” in physical spaces.

A PwC survey reveals that automation is expected to more than double across manufacturing by 2030, but companies struggle with poor data quality, skills gaps, and fragmented systems. Ryan Hawk, a PwC industrials leader, notes that “as automation becomes ubiquitous, the advantage shifts from who has tools to who can orchestrate them across the enterprise.” This suggests that successful AI implementation requires more than just technology – it demands organizational readiness and strategic integration.

The Human Factor: From Creative Frustration to Real Consequences

Gewirtz’s sketchnote experiments reveal a fundamental truth about current AI tools: they require significant human guidance and iteration. His final recommendation – “Expect to revise the sketchnote repeatedly to get it right” – applies broadly across AI applications. Whether creating visual summaries or analyzing security vulnerabilities, AI tools function best as collaborators rather than autonomous solutions.

This collaborative dynamic raises important questions about AI’s role in professional settings. As Gewirtz discovered through six iterations of prompts and corrections, achieving quality results requires both technical understanding and persistent refinement. The same principle applies in manufacturing, where Afzal warns that “the real risk sits in that gap between digital capability and physical readiness.”

Looking Forward: Balancing Promise with Practicality

The experiences with Nano Banana 2 for creative work, Gemini Pro for security analysis, and manufacturing automation challenges all point to a common theme: AI’s value depends heavily on context and implementation. While Google promotes Nano Banana 2’s improvements in text rendering and consistency, real-world testing shows these advances still require careful human oversight.

As AI tools become more sophisticated, their limitations become more nuanced. The numbering errors in sketchnotes, the security vulnerabilities revealed through AI-assisted analysis, and the physical constraints in manufacturing all demonstrate that AI excellence requires more than just better algorithms – it demands better integration with human workflows, physical environments, and security protocols. The future of AI may depend less on what these tools can do autonomously and more on how effectively they can augment human expertise across diverse professional domains.
