
Ida’s Insights from Tech Blueprint 2025 – How We Build AI-Ready Organizations (Gothenburg Tech Week)

  • Ida Martinsson
  • Oct 21
  • 9 min read

When I entered World of Volvo for Tech Blueprint during Gothenburg Tech Week, I knew I would hear many perspectives on artificial intelligence. But I didn’t expect the day to leave me with so many reflections on people. Because behind all the technology, models, and algorithms lies a more fundamental question: how ready are we, really, to work with AI in practice?


Now, I’d like to share my key insights from this full day dedicated to AI.


Tech Blueprint @ Gothenburg Tech Week. Panel discussion.

Lay of the Land – The Will Is There, but the Structure Is Missing

Several of the opening presentations revealed the same pattern: we have great ambitions, but our approach is fragmented. EY shared figures that are still echoing in my mind – the vast majority of major AI investments fail. They referred to a recent report from MIT Sloan Management Review, showing that while almost all large companies have started their AI journey, only about five percent manage to achieve a measurable business impact.

EY described this as a clear gap between ambition and execution – many are experimenting with AI, but few succeed in moving from pilot projects to true transformation. It’s rarely the technology that fails; it’s the structure. Maturity isn’t about more tools or larger datasets – it’s about the ability to learn, adapt, and scale in sync with organizational change.


I found it interesting that several speakers throughout the day returned to the same theme. Many emphasized the importance of the right data rather than more data – quality over quantity. Others, including representatives from Recorded Future, highlighted the need to trace the origin of data and understand its credibility. We build AI on the information we feed it, and when that information is weak, the intelligence will be too.


Listening to these discussions, it struck me that this is ultimately about something bigger than technology: it’s about structure, accountability, and learning. About creating systems that don’t just respond — but understand.


Building the Foundation – Start Small, but Make It Real


A recurring phrase throughout the day was “Don’t boil the ocean.” In other words: don’t try to solve everything at once. Several speakers shared stories about companies that lost their way by trying to build comprehensive AI strategies before even running their first experiment. It made me think about how often AI work gets stuck in endless planning phases — when it should really begin small.


Starting small doesn’t mean starting cautiously. It means doing something real — at a manageable scale, but with real data and real users. Another wise insight concerned external solutions: use them to learn faster. It’s often better to test an existing solution and see what it actually delivers than to build your own before you even know what problem you’re trying to solve.


Checklist: How to Build a Sustainable AI Foundation

  • Start with business value, not technology.

  • Define the problem you actually want to solve.

  • Ensure data quality and understand your data sources.

  • Run small, sharp tests with real users.

  • Let governance evolve from experience — not the other way around.


ROI was another recurring topic. Some argued that you shouldn’t calculate returns for every single project, but rather look at the portfolio as a whole. Two successful initiatives out of twenty may be enough to drive real change. It’s a reminder that learning itself is a form of return — even if it doesn’t show up in the numbers.
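The portfolio arithmetic behind that claim is easy to sketch. The figures below are purely illustrative (none were given at the event), but they show how a portfolio can deliver a healthy return even when 18 of 20 pilots produce nothing:

```python
# Hypothetical portfolio of 20 AI pilots: most return nothing,
# two succeed. All figures are illustrative, not from the talks.
pilot_cost = 100_000                          # cost per pilot
returns = [0] * 18 + [1_500_000, 2_000_000]  # 2 of 20 pay off

total_cost = pilot_cost * len(returns)
total_return = sum(returns)
portfolio_roi = (total_return - total_cost) / total_cost

print(f"Portfolio ROI: {portfolio_roi:.0%}")  # prints "Portfolio ROI: 75%"
```

Judged project by project, 18 of these 20 investments look like failures; judged as a portfolio, the initiative is clearly worthwhile.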


From Pilot to Scalable – When Culture and Pace Make the Difference

When the conversation turned to how AI actually scales within organizations, it became clear just how much it all comes down to culture. For experiments to evolve into real business impact, organizations must both allow and encourage them. Speakers emphasized the need for a sense of urgency and the importance of identifying the right stakeholders. Several highlighted the role of early adopters — those who dare to test first and inspire others to follow.


There was one phrase that stuck with me: AI scales when culture scales. If an organization lacks a culture of experimentation, reflection, and learning, then no amount of strategy will make a difference.


One of the most striking points made during the day was how misplaced the focus often is in today’s AI initiatives. A speaker showed a simple but powerful comparison: ideally, we should allocate 10% of resources to algorithms, 20% to technology, and 70% to people and processes — because that’s where real scalability lives. In practice, however, most companies do the opposite: they spend up to 90% on algorithms and technology, and almost nothing on the people who are meant to work with them. That’s precisely why so many AI initiatives get stuck in the pilot phase.
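The 10/20/70 rule lends itself to a quick self-check. As a minimal sketch (the helper function and the sample budget are my own illustration, not from the talk), you can compare an actual spend breakdown against the recommended split:

```python
# Recommended split from the talk: 10% algorithms, 20% technology,
# 70% people and processes.
RECOMMENDED = {"algorithms": 0.10, "technology": 0.20, "people_and_processes": 0.70}

def budget_gap(spend: dict) -> dict:
    """Return each category's actual share minus its recommended share."""
    total = sum(spend.values())
    return {k: spend.get(k, 0) / total - RECOMMENDED[k] for k in RECOMMENDED}

# The typical inverted pattern: ~90% on algorithms and technology.
gaps = budget_gap({"algorithms": 600_000, "technology": 300_000,
                   "people_and_processes": 100_000})
print(gaps)  # people_and_processes lands 0.60 below its recommended share
```

A large negative gap on people and processes is, by the speaker's logic, exactly the signature of an initiative that will stall in the pilot phase.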


Four Success Factors for Scaling AI

  1. Involve the people who actually own the problems – not just IT.

  2. Create small wins that demonstrate value quickly.

  3. Collaborate with startups to accelerate progress.

  4. Make learning an outcome – not a byproduct.


I also liked the idea that companies should collaborate more with startups. Large organizations have the resources, while smaller ones have the pace. The combination can create a learning environment that benefits both.


A New Perspective – AI as a Colleague

One of the most concrete ideas I heard during the day was that we need to start seeing AI as an employee rather than a tool. That shift changes everything. If we think of AI as a colleague, it means we onboard it, train it, follow up, and evaluate its performance. We lead it, rather than just use it.

This mindset reflects something I often see in my own work: organizations that treat AI as a system often fail to unlock its real value. Those that see it as a capability they themselves must develop tend to succeed.


One company demonstrated how they use prompting to generate insights directly from their own data. Decision-makers no longer need to send long chains of questions up the hierarchy — they can get answers in real time. To me, that’s a clear sign that AI is starting to become part of the decision-making process itself, not just an add-on to it.


Real ROI – Value in Multiple Dimensions

The ROI discussions sparked a wide range of perspectives. One person put it simply: if you want financial return, AI must help you either make more money or spend less. Efficiency on its own isn’t ROI unless it translates into real, tangible value.

At the same time, a representative from Volvo Cars described how they view ROI quite differently. For them, a successful AI investment might mean a safer car — perhaps even a life saved. That’s hard to quantify in money, but its importance is unmistakable.


It made me reflect on how narrowly we often define return. In a broader sense, it’s also about trust, safety, and the speed of learning. The organizations that learn the fastest will ultimately be the ones that stand the strongest.


Three Ways to Look at ROI

  1. Financial ROI – Costs, Revenue, Efficiency. The traditional lens of return: how AI helps the organization save money, increase income, or streamline operations. It’s about measurable results — improved margins, productivity gains, or reduced waste.

  2. Strategic ROI – Security, Trust, Risk Reduction. Here, value is measured in stability and resilience. AI that strengthens cybersecurity, enhances decision accuracy, or builds stakeholder trust delivers long-term strategic advantages that go beyond quarterly numbers.

  3. Cultural ROI – Learning Speed and Adaptability. Perhaps the most transformative form of return. When AI accelerates learning and drives openness to change, the organization evolves faster than its competitors. Cultural ROI ensures that every new technology becomes a catalyst for continuous improvement — not resistance.

One of many panel discussions during Tech Blueprint.

AI and Working Life – The Fear and the Direction

One of the most human moments of the day centered on the fear of being replaced. Someone put it bluntly: if people feel threatened by AI, they won’t join the journey. Change requires participation. When employees are given new roles and see that they bring new value, the fear begins to fade.


At the same time, more existential questions emerged — like whether, in the future, we might pay taxes for AI agents. It may sound far-fetched, but it shows that we’re already beginning to prepare mentally for a world where work and value creation are no longer the same thing.


I believe that leadership’s most important task right now is to create security and direction. Not to promise that everything will remain the same, but to help people understand why change is necessary — and how they themselves are part of it.


AI Takes Shape – When Learning Comes to Life

One of the most fascinating parts of the day focused on robotics and what might be called “true intelligence.” The speaker from IntuiCell used the example of a newborn giraffe learning to stand and walk within hours. That kind of learning — understanding the world in real time — is something today’s AI still lacks. Our systems are pre-trained and can only operate effectively in environments they already know.


They argued that we may have built AI the wrong way from the start. Perhaps what we need isn’t more data, but AI that can understand and adapt to what it hasn’t seen before. We train our models on what has already happened, but the future hasn’t happened yet. To create true intelligence, we must give our systems the ability to learn while they live, not just while they’re being trained.


The Narrative Decides – The Image of AI Shapes Reality

Toward the end of the day, one speaker discussed how the way we talk about AI directly shapes its development. If we describe AI as a threat, we will inevitably create regulations and systems built on fear. But if we see it instead as a tool for solving problems, we’ll develop it in ways that foster innovation and progress.


I especially liked the comparison to a self-driving car stuck in a roundabout because it can’t interpret a solid line on the road. That image feels far more accurate than the dystopian narrative of robots taking over the world. AI is neither good nor evil by nature. Ethics live in us — the creators — not in the technology itself.


From Technology to Meaning – AI Across All Industries

One example that caught my attention was Icebug, the footwear company using AI to enhance its processes. It’s a great reminder that AI isn’t reserved for tech companies. On the contrary — those who learn to use AI as a tool for improvement will gain a competitive edge, no matter their industry.


Several speakers emphasized starting from business value and working backward. Governance and structure will evolve over time, but you have to begin with something real. I found that perspective refreshing at a time when so many organizations are still stuck in analysis instead of action.


Toward the end of the day, the discussion turned to branding and authenticity. It was highlighted that you can’t “AI-generate” your way to trust — people can tell when something is genuine. Storytelling, transparency, and authenticity will become even more important going forward. AI can automate many things, but it can’t automate meaning.


My Conclusions


When I sum up the day, I see three clear threads emerging.

First: AI maturity starts in culture, not in code.

Second: Small, real experiments are the key to scalability.

Third: The winners of the future will be those who learn the fastest — both humans and machines.


An Added Reflection – How It All Connects


Looking back on the entire day at Tech Blueprint, I realize that every perspective — whether it was statistics on failed AI projects, discussions about culture and learning, reflections on ethics, the exploration of biological learning in robotics, or the conversation about brand authenticity — all revolved around the same central theme: how we create meaning in an age of machines.


The technology is impressive, but it’s our decisions that determine the direction. When we talk about culture and mindset, we’re really talking about responsibility. When we discuss ROI, it ultimately comes down to values. And when we explore robotics and true intelligence, it’s about understanding — about trying to build something that shares our ability to learn, not just to compute.


To me, it’s clear that successful AI initiatives don’t emerge from choosing the right model or tool, but from people, processes, and technology growing together. AI maturity is, in essence, organizational maturity — a balance of structure, curiosity, and courage.


I also believe we’re entering a new era of responsibility, as leaders, employees, and as a society: one in which we see AI not merely as a tool for efficiency, but as a mirror of our collective thinking. Every decision we make about how we train, use, and talk about AI shapes not only the technology itself, but also our own capacity to understand the future.


I’m leaving Tech Blueprint with more questions, ideas, and future reflections than I arrived with — many of them about how AI can become a genuine force for progress. It’s a topic I’ll continue to explore in future posts.


Author

– Ida Martinsson, Sales & Marketing Manager, Cyber Instinct



Speakers and topics can be explored in more detail here.


If you’re looking for your next AI solution, I recommend starting by connecting with UltraDefy.

 
 