The last decade has seen the divide between tech and commercial teams thin almost to the point of nonexistence. And I, for one, am in favor of it. Not every tech team works in a tech company, and blurring the lines between the commercial and the technological means that we can build and ship a product safe in the knowledge that it will be well received, widely adopted (not always a given), and contribute meaningfully to the bottom line. Name a better way to motivate a high-performance tech team, and I’ll listen.
It’s a change that was accelerated — if not caused — by data tech. We’ve spent decades working through big data, business intelligence, and AI hype cycles. Each introduced new skills, problems and collaborators for the CTO and their team to get to grips with, and each moved us just a little further from the rest of the organization; no one else can do what we do, but everyone needs it done.
Technical teams are not inherently commercial, and as these roles expanded to include building and delivering tools to support various teams across the organization, this gap became increasingly apparent. We’ve all seen the stats about the number of data science projects, in particular, that never get productionized — and it’s little wonder why. Tools built for commercial teams by people who don’t fully understand their needs, goals or processes will always be of limited use.
This waste of technology dollars was easily justified in the early days of AI — investors wanted to see investment in the technology, not outcomes — but the tech has matured, and the market has shifted. Now, we have to show actual returns on our technology investments, which means delivering innovations that have a measurable impact on the bottom line.
Transitioning from support to a core function
The growing pains of the data tech hype cycles have delivered two incredible boons to the modern CTO and their team (over and above the introduction of tools like machine learning (ML) and AI). The first is a mature, centralized data architecture that removes historical data silos across the business and gives us a clear picture — for the first time — of exactly what’s happening on a commercial level and how one team’s actions affect another. The second is the move from a support function to a core function.
This second one is important. As a core function, tech workers now have a seat at the table alongside their commercial colleagues, and these relationships help to foster a greater understanding of processes outside of the technology team, including what these colleagues need to achieve and how that impacts the business.
This, in turn, has given rise to new ways of working. For the first time, technical individuals are no longer squirreled away, fielding unconnected requests from across the business to pull this stat or crunch this data. Instead, they can finally see the impact they have on the business in monetary terms. It’s a rewarding viewpoint, and one that motivates an approach that maximizes this contribution and aims to generate as much value as quickly as possible.
Introducing lean value
I hesitate to add another project management methodology to the lexicon, but lean-value warrants some consideration, particularly in an environment where return on tech investment is so heavily scrutinized. The guiding principle is ‘ruthless prioritization to maximize value.’ For my team, that means prioritizing research with the highest likelihood of either delivering value or progressing organizational goals. It also means deprioritizing non-critical tasks.
We focus on attaining a minimum viable product (MVP), applying lean principles across engineering and architecture, and — here’s the tricky bit — actively avoiding a perfect build in the initial phase. Each week, we review non-functional requirements and reprioritize them based on our objectives. This approach reduces unnecessary code and prevents teams from getting sidetracked or losing sight of the big picture. It’s a way of working we’ve also found to be inclusive of neurodiverse individuals within the team, since there’s a very clear framework to remain anchored to.
The result has been accelerated product rollouts. We have a dispersed, international team and operate a modular microservice architecture, which lends itself well to the lean-value approach. Weekly reviews keep us focused and prevent unnecessary development — itself a time saver — while allowing us to make changes incrementally and so avoid extensive redesigns.
Leveraging LLMs to improve quality and speed up delivery
We set quality levels we must achieve, but opting for efficiency over perfection means we’re pragmatic about using tools such as AI-generated code. GPT-4o can save us time and money by generating architecture and feature recommendations. Our senior staff then spend their time critically assessing and refining those recommendations instead of writing the code from scratch themselves.
There will be plenty who find that particular approach a turn-off or short-sighted, but we’re careful to mitigate risks. Each build increment must be production-ready, refined and approved before we move on to the next. There is never a stage at which humans are out of the loop. All code — especially generated code — is overseen and approved by experienced team members in line with our own ethical and technical codes of conduct.
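To make that concrete, here is a minimal sketch of what an LLM-assisted, human-gated workflow can look like. It assumes the OpenAI Python SDK and GPT-4o; the prompt, file path and review step are illustrative stand-ins, not our production pipeline.

```python
# A minimal sketch of an LLM-assisted, human-gated workflow (illustrative only).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from pathlib import Path

from openai import OpenAI

client = OpenAI()


def draft_recommendation(requirement: str) -> str:
    """Ask the model for an architecture/feature recommendation as a draft."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Propose a concise architecture for the requirement. "
                    "Flag assumptions and open questions explicitly."
                ),
            },
            {"role": "user", "content": requirement},
        ],
    )
    return response.choices[0].message.content


def submit_for_review(draft: str, path: str = "drafts/architecture_proposal.md") -> None:
    """Write the draft to a file that goes through normal review.

    Nothing generated here is merged without a senior engineer's sign-off.
    """
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    Path(path).write_text(draft, encoding="utf-8")


if __name__ == "__main__":
    draft = draft_recommendation(
        "Ingest site enrollment data nightly and expose it to the analytics team."
    )
    submit_for_review(draft)
```

The point of the gate is structural: generated output only ever enters the codebase through the same review path as human-written code.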
Data lakehouses: lean-value data architecture
Inevitably, the lean-value framework spilled out into other areas of our process, and embracing large language models (LLMs) as a time-saving tool led us to data lakehousing: a portmanteau of data lake and data warehouse.
Standardizing data and structuring unstructured data to deliver an enterprise data warehouse (EDW) is a years-long process, and it comes with downsides. EDWs are rigid, expensive and have limited utility for unstructured data or varied data formats.
A data lakehouse, by contrast, can store both structured and unstructured data, and using LLMs to process it reduces the time required to standardize and structure that data while automatically transforming it into valuable insight. The lakehouse provides a single platform for data management that can support both analytics and ML workflows, and it requires fewer resources from the team to set up and manage. Combining LLMs and data lakehouses speeds up time to value, reduces costs and maximizes ROI.
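As a rough illustration of that combination, the sketch below uses an LLM to map free-text notes onto a fixed schema and lands the rows in lakehouse-style storage. The schema, prompt and file path are assumptions made up for the example; a production lakehouse would typically use an open table format such as Delta Lake or Apache Iceberg on object storage.

```python
# A minimal sketch of LLM-driven structuring feeding lakehouse storage.
# The schema, prompt and path below are illustrative assumptions, not a
# production setup. Requires the OpenAI SDK, pandas and pyarrow.
import json
from pathlib import Path

import pandas as pd
from openai import OpenAI

client = OpenAI()

EXTRACTION_PROMPT = (
    "Extract these fields from the note and return JSON: "
    "site_name, study_phase, enrollment_count. Use null for missing fields."
)


def extract_record(note: str) -> dict:
    """Map unstructured text onto a fixed schema via the model."""
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # constrain the reply to JSON
        messages=[
            {"role": "system", "content": EXTRACTION_PROMPT},
            {"role": "user", "content": note},
        ],
    )
    return json.loads(response.choices[0].message.content)


def append_to_lakehouse(records: list[dict], path: str = "lakehouse/site_notes.parquet") -> None:
    """Persist structured rows as Parquet; a real deployment would normally
    write to an open table format (Delta Lake, Iceberg) on object storage."""
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    pd.DataFrame(records).to_parquet(path, index=False)


if __name__ == "__main__":
    notes = ["Riverside site enrolled 42 participants in the Phase II arm this week."]
    append_to_lakehouse([extract_record(n) for n in notes])
```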
As with the lean-value approach to product development, this lean-value approach to data architecture requires some guardrails. Teams need to have robust and well-considered data governance in place to maintain quality, security and compliance. Balancing the performance of querying large datasets while maintaining cost efficiency is also an ongoing challenge that requires constant performance optimization.
A seat at the table
The lean-value approach is a framework with the potential to change how technology teams integrate AI insight with strategic planning. It allows us to deliver meaningfully for our organizations, motivates high-performing teams and ensures they’re used to maximum efficiency. Critically for the CTO, it ensures that the return on technology investments is clear and measurable, creating a culture in which the technology department drives commercial objectives and contributes as much to revenue as departments such as sales or marketing.
Raghu Punnamraju is CTO at Velocity Clinical Research.