Tuesday, April 1, 2025

Vibe Coding

 








What is Vibe Coding?

Recently, I came across the term vibe coding, and it immediately resonated. The work I’ve been doing with Kupala-Nich over the past nine months fits that description perfectly. I started this project because I wanted to build a feature-rich valuation and risk platform—something robust, extensible, and available to everyone. There’s something deeply satisfying about having an idea for how something should work and then seeing it become real. Whether it’s a concept I came up with, a recommendation from a friend, or a request from a client, the act of turning an abstract idea into a working feature is what drives me. This is why I code; it’s where I find joy and fulfillment in the creative process.

But vibe coding is more than just building features. It’s also about how you build. It’s about working in a rhythm that feels natural, balancing creativity and engineering discipline. For me, it also means investing in everything that makes the platform not just functional, but sustainable over the long term. In the sections that follow, I’ll explore the practices that complete the picture and make this a holistic experience: research, refactoring, testing, automation, and platform engineering.



Technical Research: Staying Curious

One of the greatest benefits of working without externally imposed deadlines is the freedom to pause feature work and explore new ideas, practices, or technologies. Recently, I’ve found myself diving into something new almost every other week—not because I have to, but because I genuinely want to understand how these innovations might fit into the future of Kupala-Nich.

Some of these explorations remain curiosities with no immediate application (like the Cursor editor or AWS API Marketplace). Others quickly become valuable tools I use every day—such as VS Code Insiders with GitHub Copilot, or CloudWatch-based monitoring and alerting. Some even lead to unexpected and interesting new connections (like Matalogica AADC). And occasionally, an exploration ends up playing a pivotal role in the platform’s evolution—DynamoDB, PyCaret, and AWS CDK are great examples.

It’s hard to assign a precise ROI to this type of research, but I’ve noticed a consistent pattern: I spend less than 20% of my time on technical exploration, yet it contributes more than 20% to the long-term capability and sustainability of the platform.


Refactoring: Clearing the Path

Refactoring is not my favorite activity. Pair that with the fact that it often gets in the way of shipping a feature on time, and it’s easy to see why it’s so frequently postponed. But over time, I’ve come to see refactoring as something like cleaning or reorganizing your living space: you can avoid it for a while, but eventually it starts affecting your mood. You begin tripping over things, or you end up buying replacements for items you already own but can’t find. The same chaos can creep into a codebase.

With Kupala-Nich, I’ve learned to treat refactoring as part of the natural rhythm of development. Sometimes I do it because I wake up with a better idea for how something should be structured. Sometimes it’s a promise I made to myself—“I’ll refactor this once the feature is done.” But more often, it happens when I start building a new feature and realize the current structure is holding me back. At that point, deadlines take a back seat, and I’ll spend a few days redesigning or reorganizing the codebase so the new functionality can live in a better place.

The best part of refactoring is how it makes you feel afterwards. There’s this lightness, like a weight has been lifted. Code that felt messy or claustrophobic is suddenly clean and breathable. And features that once seemed intimidating start to feel exciting again. It’s a reboot—not just for the project, but for your motivation.



Automated Testing: Reducing Coding Anxiety

I write tests because I don’t like the uncertainty that comes with changing code. Introducing a new feature shouldn't feel like a risk, and a good test suite makes development feel safer and more predictable. Tests also improve speed, especially when working with AWS serverless infrastructure, where deploying and debugging can take time. Writing quick, focused tests is often more efficient than deploying just to verify behavior.
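To sketch that speed difference: a Lambda-style handler can be exercised directly in a local test, with no deploy step at all. The handler below is hypothetical, not actual Kupala-Nich code, and the flat rate is a placeholder.

```python
import json

# Hypothetical handler written in the style of an AWS Lambda entry point.
def price_handler(event, context):
    notional = float(event["queryStringParameters"]["notional"])
    rate = 0.05  # placeholder flat rate, for illustration only
    return {"statusCode": 200, "body": json.dumps({"pv": notional * rate})}

def test_price_handler_locally():
    # Calling the handler directly runs in milliseconds, versus
    # deploying to AWS just to verify the same behavior.
    event = {"queryStringParameters": {"notional": "1000"}}
    response = price_handler(event, None)
    assert response["statusCode"] == 200
    assert json.loads(response["body"])["pv"] == 50.0
```

The same function can later be wired to API Gateway unchanged; the test doesn’t care where it runs.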

My test coverage isn’t exhaustive, nor do I aim for 100%. Instead, I focus on writing low-level tests that cover important functionality without being brittle. This balance ensures that even large structural changes typically impact only a small number of tests, usually less than 2%. I put effort into making tests reliable, isolated, and consistent so they continue to add value over time.

Maintaining tests is just as important as writing them. When APIs evolve, I regularly revisit, refactor, or remove tests to keep the suite clean and relevant. I prefer using stubs over mocks for stability, and I like validating real data interactions wherever possible. For AWS-specific components, I rely on the Moto library to simulate AWS behavior. While it doesn’t fully support every service (API Gateway, for example), it’s proven to be effective for the majority of my testing needs.



Security, Automation, and Platform Engineering

Building scalable and robust applications involves a level of complexity that goes far beyond individual features or algorithms. It requires provisioning infrastructure, installing the right operating systems and dependencies, configuring middleware such as databases, API gateways, and message buses, and ensuring that every component is secure, available, and correctly integrated.

Security brings its own set of responsibilities: setting up user authentication, managing access controls, provisioning certificates, and defining permissions for each resource. Frankly, managing these configurations, especially through web UIs, is rarely the highlight of my day. Manual processes are error-prone and difficult to maintain, which is why I’ve leaned heavily on infrastructure-as-code solutions.

Docker, GitLab, AWS CloudFormation, and AWS CDK have been essential tools in addressing this complexity. They allow me to automate nearly all aspects of infrastructure and security configuration, using Python or YAML definitions. This approach ensures repeatability, clarity, and version control for the platform’s architecture.

Learning these tools has been its own ongoing research project. Fortunately, with resources like ChatGPT and strong documentation from AWS, it’s easier than ever to navigate the many options and best practices available. While the configuration layer of platform engineering can be daunting, it becomes much more manageable, and even enjoyable, when approached with the right tools and mindset.

