The work before the work
The most important decisions happen before you write a single line of code. Your choice of project, technology, and domain determines success or failure, whether the project is AI-driven or not. Prompt choices come much later; let's start with the fundamentals.
Understanding the problems you're trying to solve
Understanding your project - the problems you're trying to solve, the constraints, the expectations - all of this comes before you write the first spec or the first prompt. I appreciate that might sound ridiculously simple and obvious, but in my experience it's always the basics that suffer the most, and they end up costing the project the most time and money in the end.
So what did we try to do, as a first AI-driven but non-trivial project?
- Web and mobile-based UI
- Backend with a relational database at its core, with tens of interlinked entities
- Backend/frontend technologies that we're familiar with and know how to debug
- Mostly CRUD-based operations - ERPs are typically CRUD-heavy, with interactions on the simpler end of the complexity spectrum
- Targeting a market that is right for us and that we can handle
- Understanding the support requirements and that we can actually handle them ourselves
Why that's important
We didn't pick this project randomly. We picked it because it plays to a set of deliberate advantages:
Technology we know how to debug. When (not if) things go sideways, we can dig ourselves out of the ditch without waiting for an LLM to figure it out. The .NET ecosystem, PostgreSQL, EF Core - these are tools we've used for years. Even if no AI models were available tomorrow, we could continue the project if we chose to. We'd rather not continue manually, though, as it's A LOT of work - and that's not because we're lazy, but because we have to weigh the risk/reward ratio of time and money invested versus the return. But I digress...
CRUD-heavy domain. ERPs are generally considered complex because of the sheer number of entities and their rules and interactions, not because they require algorithmic breakthroughs or millions of parallel executions per second. This is great, because it gives us familiar footing and doesn't put too much stress on the AI model to deliver novelty. We're not re-inventing the wheel, we're making a shinier one. And based on our earlier testing, this is exactly the scenario where AI models excel.
Web + mobile, relational DB at the core. This has been done countless times before and it's a reliable approach - it works, and there's nothing new or flashy about it. As mentioned before, we're not inventing a new data-model paradigm or building a distributed event-sourced system. We're building a newer, shinier wheel - one with all the bells and whistles that typical legacy projects in the space lack (like live chat support built into each instance!).
A market that fits us. Small enough to reach, not saturated to the gills - we're not competing with SAP or Oracle. We know who the users are, what their typical pains are, and how to get them to embrace the digital realm (scary as it may be).
A minor side-rant, but worth considering if you're planning a project: not everything on the internet is true. If you follow the major social media platforms, the denizens there would have you believe that every software problem is solved and the whole world is already covered. In reality, take a walk through your city or talk to smaller companies in your area - most of them are still in the pen-and-paper realm (and not because they enjoy playing DnD so much). There IS opportunity in the market, but it takes good, old-fashioned hard work: talking to people, trying things out, and seeing what sticks.
Why the choices matter
So why all this talk - where does it get us? Well, if you can't rescue yourself from a ditch without the LLM, you've already lost. The LLM will confidently walk you deeper into the ditch and tell you it's a feature. We know, it tried. Sometimes, we even listened.
We chose a project where:
- Our existing expertise covers the critical path
- The LLM covers the breadth we lack
- Getting stuck means "slow down", not "stop"
Bottom line
Understand the risks and advantages of your approach before you do any real work. Know the tooling and what it can and cannot reasonably support. Understand why you chose that particular tooling, what problem it's helping you solve, and where you're taking it with what architecture and approach. Move fast and adjust, learn as much as you can - but learn on real projects with real use, not esoteric POCs that barely resemble production.
A proof-of-concept that generates a CRUD app from a prompt teaches you very little about what happens when that app has tens of interlinked entities, a dozen background services, multi-tenant isolation, and users who will find every edge case you didn't think of. Play it smart, do it right.