Team of professionals

Back to all news

CloudGuard by Grow2FIT

At Grow2FIT, we offer bespoke solutions tailored for businesses of all sizes, from startups to enterprises. Our dedication is to ensure your cloud infrastructure always performs at its best. Backed by our team of seasoned experts, we pledge continuous monitoring, proactive upkeep, strategic cost optimization, and agile enhancements for an efficient, cost-effective cloud ecosystem.

Basic Package Features

Choose the CloudGuard basic package for peace of mind, knowing that your cloud infrastructure is under expert watch. And when you’re ready to delve deeper into optimization and strategic planning, our advanced services are just a call away.

Price: 500€ – 1000€ monthly (excluding VAT). The final quotation depends on the complexity and scale of your infrastructure.

Additional Details:

  • Service hours: 5×8 (8 hours a day, 5 days a week)
  • SLA: Best effort

Additional Services

Bodyguards of your cloud

Tomáš Čorej
Grow2FIT Cloud & DevOps Consultant

Tomáš has 15 years of experience in designing and building high-performance, cost-effective solutions for automating the maintenance of physical servers. He prefers commodity hardware and open-source tools such as OpenStack, Terraform, Juju, and Ceph. He also has extensive experience integrating open-source tools into startup and corporate environments and operating on-premise, cloud, and hybrid solutions.

Kamil Madáč
Grow2FIT Cloud & DevOps Consultant

Kamil is a Senior Cloud / Infrastructure consultant with 20+ years of experience and strong know-how in designing, implementing, and administering private cloud solutions (primarily built on open-source software such as OpenStack). He has many years of experience with application development in Python and, more recently, in Go. Kamil has substantial know-how in software-defined storage (SDS), software-defined networking (SDN), data storage (Ceph, NetApp), Linux server administration, and the operation of deployed solutions. He is a regular contributor to open-source projects (OpenStack, Kuryr, Requests Lib – Python).

Petr Drastil
Grow2FIT Cloud & DevOps Consultant

A DevOps Consultant and Architect with a background in software development, Petr focuses on the design and implementation of IaaS and PaaS solutions in the cloud (AWS, Azure) and Kubernetes. He has worked on multiple projects that delivered standardised tooling used by developers to break legacy monolithic solutions into separate services with independent lifecycles. He is also experienced in shifting applications from dedicated servers to the Kubernetes / Red Hat OpenShift platform. Petr has worked in the finance (Deutsche Börse), telco (Deutsche Telekom), and e-commerce (Walmart Global Tech) sectors.

And many others… The entire Grow2FIT consulting team: Our team




How many software development environments are needed and why?

In software engineering, a "software development environment" refers to a combination of processes, tools, and infrastructure that developers use to design, create, test, and maintain software. This includes everything from Integrated Development Environments (IDEs), such as Visual Studio, Eclipse, and IntelliJ, to foundational tools and libraries and even broader components like databases, servers, and network setups. Simply put, it denotes a particular set of infrastructure resources set up to execute a program under specific conditions.

As software advances through its life cycle, different environments address the unique requirements of the Development and Operations teams. Given today’s rapid and competitive digital business setting, development teams must fine-tune their workflows to stay ahead. An efficient workflow enhances team productivity and guarantees the delivery of prompt and reliable software.

Benefits of Harnessing Multiple Environments

Parallel Development

Software development often resembles balancing multiple tasks at once. While introducing new features, it’s vital not to disrupt a live application with new bugs, performance regressions, or security vulnerabilities. While one part of the team might be fully occupied crafting fresh features, another could be refining an existing version based on feedback from testing. Segregated environments enable teams to work on different tasks without stepping on each other’s toes.

Enhanced Security

Limiting access to production data is crucial. By distributing data across various environments, we strengthen the security of production data and preserve its integrity. This reduces the chance of unintentional modifications to the live data during development or testing phases.
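One common way to enforce this separation is to mask sensitive fields before any production data is copied into lower environments. A minimal sketch of the idea, with purely hypothetical field names:

```python
import hashlib

# Hypothetical field names; real records would come from a production export.
SENSITIVE_FIELDS = {"email", "phone", "full_name"}

def mask_record(record: dict) -> dict:
    """Replace sensitive values with a stable one-way hash, so test data
    keeps referential integrity without exposing real customer details."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS and value is not None:
            digest = hashlib.sha256(str(value).encode("utf-8")).hexdigest()[:12]
            masked[key] = f"masked-{digest}"
        else:
            masked[key] = value
    return masked

customer = {"id": 42, "email": "jane@example.com", "balance": 100.0}
print(mask_record(customer)["email"])  # a deterministic, non-reversible stand-in
```

Because the hash is deterministic, the same customer maps to the same masked value everywhere, which keeps joins between test tables working.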

Minimized Application Downtime

These days, application stability and uptime are more crucial than ever. Customers expect and rely on consistent service availability, and repeated disruptions can damage a company’s reputation. By cultivating multiple environments and establishing rigorous testing, we position ourselves to launch robust and reliable software.

Efficient Hotfix Deployment

There are moments when a quick fix or enhancement must be rolled out with great speed. For such instances, having an environment that mirrors production closely and is free from ongoing feature development is invaluable. This dedicated environment facilitates quick feature or fix deployment, followed by testing, before a seamless transition to live production.

An In-Depth Look at Development Environments

As software evolves from an idea to a full-fledged application, it passes through various stages, each with its unique set of tools, protocols, and objectives. These stages, or environments, form the backbone of the development lifecycle, ensuring that software is crafted, refined, tested, and deployed precisely.

Local Development Environment

The initial stage of software development occurs in the local development environment. It acts as the primary workspace where developers initiate the coding process, often directly on their personal computers with a distinct project version. This setting allows a developer to construct application features without interference with other ongoing developments. While this environment is suitable for running unit and integration tests (with mock external services), end-to-end tests are typically less frequent. Developers commonly employ Integrated Development Environments (IDEs), software platforms offering an extensive suite of coding, compiling, testing, and debugging tools.
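As an illustration of unit testing against a mocked external service, here is a minimal Python sketch; the payment service and function names are invented for the example:

```python
from unittest.mock import Mock

# Hypothetical business logic: sums line items returned by an external
# payment service. Locally, the real service is never called.
def fetch_order_total(order_id: int, payment_client) -> float:
    items = payment_client.get_line_items(order_id)
    return sum(item["price"] * item["qty"] for item in items)

# A Mock stands in for the external service in the local environment.
payment_client = Mock()
payment_client.get_line_items.return_value = [
    {"price": 9.50, "qty": 2},
    {"price": 5.25, "qty": 1},
]

total = fetch_order_total(1, payment_client)
print(total)  # 24.25
payment_client.get_line_items.assert_called_once_with(1)
```

The mock lets a developer exercise the business logic on their own machine without network access or credentials for the real service.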

Integration Environment

At this stage in the development process, developers merge their code into the team’s shared codebase. With many developers or teams working independently, conflicts and test failures naturally arise during this integration. In expansive projects, where multiple teams focus on distinct segments (or microservices), the integration environment becomes the critical platform where all these separate functionalities come together. Integration tests may also be adjusted here to ensure application stability. Divergent implementations across teams (such as mismatched API integration points) often trace back to the initial analysis stage. Furthermore, the difficulty of developing cloud-native features locally underscores the integration environment’s essential role, highlighting the gap between local setups and actual cloud operations.

Test Environment

Also known as the quality assurance environment, the test environment employs rigorous tests to evaluate individual features and the application’s overall functionality. Tests range from internal service interactions (integration tests) to all-inclusive tests covering internal and external services (end-to-end tests). Typically, the test environment doesn’t demand the extensive infrastructure of a production setting. The primary goal is to ensure the software meets its specifications and to catch defects before they reach production. Organizations might streamline their processes by combining the integration and test environments, facilitating simultaneous initial integration and testing.

Staging Environment

The staging or pre-production environment aims to simulate the production environment regarding resource allocation, computational demands, hardware specifications, and overall architecture. This simulation ensures the application’s readiness to handle expected production workloads. Organizations sometimes opt for a soft launch phase, where the software goes through internal use before its full-scale production deployment. Access to the staging environment is typically limited to specific individuals like stakeholders, sponsors, testers, or developers working on imminent production patches. This environment’s closeness to the actual production setting makes it the go-to for urgent fixes, which, once tested here, can swiftly be promoted to production.

Production Environment

The production environment refers to the final and live phase providing end-user access. This setup includes hardware and software components like databases, servers, APIs, and other external services, all scaled for real-world use. The infrastructure in the production environment must be prepared to handle large volumes of traffic, cyber threats, or hardware malfunctions.

Other Environments

The specific needs of an application, the scale of the project, or business requirements may necessitate the introduction of additional environments. Some of the more common ones include:

  • Performance Environment: Dedicated to gauging the application’s efficiency and response times.
  • Security Testing Environment: The primary focus is to assess the application’s resilience to vulnerabilities and threats.
  • Alpha/Beta Testing Environments: These are preliminary versions of the application made available to a restricted group of users for early feedback.
  • Feature Environments: New functionalities can be evaluated in a standalone domain before being incorporated into the primary integration environment.


The software development process requires a series of specialized environments tailored to different stages of its lifecycle. The number and nature of these environments can vary based on the size and requirements of the project. For example, in some cases, to optimize workflows, the integration and testing environments might be combined into one, providing a unified platform for both merging code and conducting initial tests.

While performance-focused environments have their place, with the proper monitoring tools in production, a separate performance environment is sometimes unnecessary.

In conclusion, the software development environment isn’t a one-size-fits-all approach. It demands careful planning and customization to fit a project’s specific goals and needs. Making the right choices in setting up these environments is critical to ensuring a smooth journey from idea to launch, ultimately delivering top-notch applications.


Róbert Ďurčanský
Senior Fullstack Developer

Róbert is a highly skilled Senior Fullstack Developer with over 15 years of experience in the software development industry. With a strong background in back-end and front-end development and UX&Graphics and a passion for delivering high-quality solutions, Róbert has proven expertise in a wide range of technologies and frameworks. He is proficient in TypeScript, Angular, Java, Spring Boot, Kotlin, and AWS Cloud Solutions, among others. Throughout his career, Róbert has worked on various projects, including e-commerce platforms, financial systems, and game development.

The entire Grow2FIT consulting team: Our team



Reference: Raiffeisen Bank International – Designing a Digital Bank’s Data Architecture

Raiffeisen Bank International (RBI), a prominent banking group, was on the journey of launching its new digital banking platform. With the rapid digitization of banking services and the increasing demand for seamless online customer experiences, RBI recognized the imperative need for a robust and adaptable data architecture. While the bank had in-house teams proficient in traditional banking systems, they sought external expertise to harness the full potential of contemporary cloud technologies.

The Problem

RBI’s vision of its digital bank was modern, agile, and future-ready. The challenge was twofold:

  • Designing a data architecture that would be scalable, efficient, and capable of handling the vast influx of digital transactions.
  • Ensuring that the architecture, while modern, would remain compliant with internal and external regulations and seamlessly integrate with RBI’s existing systems.

Our Solution

Our specialized team of Data Consultants delved into the project with a two-pronged approach:

  • Serverless and Cloud-Agnostic Architecture: Our design principles prioritized a serverless framework on AWS. This not only ensured automatic scalability without the overhead of managing servers but also brought down operational costs. Moreover, by designing the architecture to be cloud-agnostic, we ensured that RBI would not be tethered to a single cloud provider, granting them flexibility and resilience in their digital endeavours.
  • Integration and Compliance: Acknowledging the paramount importance of security and regulation in the banking sector, our solution was meticulously tailored. We:
    • Conducted a comprehensive Requirements Analysis to ascertain the bank’s needs and align our design accordingly.
    • Crafted the Data Architecture and Data Processing blueprint utilizing a suite of cloud-agnostic services, ensuring optimal data flow, storage, and retrieval mechanisms.
    • Ensured Internal Regulation Compliance by integrating the architecture with RBI’s internal environment, embedding requisite security measures, and devising a robust security concept.


With our intervention, Raiffeisen Bank International now boasts a state-of-the-art digital banking data architecture that stands as a beacon of efficiency, resilience, and adaptability. The bank is poised to deliver unmatched digital banking experiences to its customers while staying ahead of the curve in the rapidly evolving fintech landscape.


Key Technologies

  • AWS


Welcome Marián Ivančo: Software Architect with 20+ Years of Experience

We are pleased to announce Marián Ivančo has joined our team. With over 20 years in the field, Marián has extensive experience in designing and implementing complex IT systems. His work has covered a range of sectors, including finance, gaming, and energy.

Marián is adept at migrating from legacy systems to modern container solutions. His technical expertise includes Java, Kubernetes, cloud solutions, and container platforms. Throughout his career, he’s played pivotal roles in large-scale system integrations and migrations.

We’re looking forward to Marián’s contributions and the wealth of experience he brings to our team.

Check our other Senior Consultants here


Our Summer Teambuilding Adventure Was a Splash!

🌊☀️ Had an absolute blast at our summer teambuilding event! 🚣‍♂️🏄‍♀️

Check out this video for a sneak peek of our adventurous day filled with rafting, surfing, and more! 💦
Grateful for a team that knows how to work hard and play hard. 💪😄


Reference: Atlas Group – Monitoring, Support, and Infrastructure Development

Atlas Group is a technology-driven organization that relies on Kubernetes for its infrastructure. They sought assistance in monitoring, support, and problem-solving in their Kubernetes environment. Additionally, they required help in setting up a distributed block-based storage solution based on LINSTOR to provide persistent volumes for their pods or NFS storage.


Grow2FIT, a service provider specializing in Kubernetes and infrastructure management, partnered with Atlas Group to address their needs. The following services were provided:

  • Monitoring and Support
    • Implemented a monitoring system to identify issues or anomalies in the Kubernetes environment proactively.
    • Established a support mechanism to promptly address and resolve problems encountered by the Atlas Group team.
    • Responded to requests for assistance regarding Kubernetes and other related technologies.
  • Problem Solving and Consultation
    • Provided consultation services to Atlas Group, offering expertise and guidance in problem-solving and troubleshooting within the Kubernetes ecosystem.
  • Infrastructure Development
    • Upgraded Kubernetes to newer versions, ensuring smooth transitions and minimizing disruptions.
    • Engaged in ongoing maintenance and problem resolution related to Kubernetes and other infrastructure components.
  • Distributed Block-based Storage (LINSTOR)
    • Assisted Atlas Group in setting up a distributed block-based storage solution based on LINSTOR.
    • Configured LINSTOR to provide persistent volumes for their pods, enabling data persistence and reliability.
    • Integrated NFS storage into the infrastructure, leveraging LINSTOR to broaden the available storage options.


  • Swift identification and resolution of issues through proactive monitoring and responsive support.
  • Successful implementation of LINSTOR, providing reliable and persistent volumes for their pods.
  • A collaborative partnership between Atlas Group and Grow2FIT ensured ongoing support and consultation, enabling their infrastructure’s seamless development and enhancement.


Key Technologies

  • Kubernetes

Contact Person

Tomáš Řehák, Head of Engineering


We’ve moved to The Spot Bratislava

We are thrilled to announce our move to a new office in The Spot Bratislava! Our fresh and inspiring workspace is all about growth, innovation, and collaboration. Come visit us, enjoy a cup of coffee, and see our new environment for yourself!




Meet Róbert Ďurčanský: A Highly Skilled Senior Fullstack Developer

We are delighted to introduce Róbert Ďurčanský, a seasoned Senior Fullstack Developer with over 15 years of experience in the software development industry. Róbert brings a wealth of expertise in back-end and front-end development, UX&Graphics, and a passion for delivering high-quality solutions.

With proficiency in technologies like TypeScript, Angular, Java, Spring Boot, Kotlin, and AWS Cloud Solutions, Róbert is well-versed in a wide range of frameworks and tools. His technical prowess enables him to adapt to evolving technologies, ensuring efficient and innovative solutions for complex projects.

Throughout his career, Róbert has contributed to diverse projects, including e-commerce platforms, financial systems, and game development. His ability to easily tackle challenges and meticulous attention to detail have consistently delivered remarkable results.

Check our other Senior Consultants here


Case study: Teradata to Snowflake migration for a large retailer

Customer situation

The customer is a leading FTSE 100 UK-based retailer operating a large (approx. 300 TB, 10,000+ tables, 100,000+ columns, 30,000,000+ new transactions per day) data warehouse on the Teradata platform. Reports and data from it were used primarily by the finance department, and by many other teams, to manage their performance and feed various analytics tools.

The customer is undergoing a transformation to a strategic platform. Snowflake was selected as the best-fitting, most performant solution. The strategic platform is also coupled with a completely new data modeling approach following the Data Vault 2.0 standard. But as this is a long-term project, it was necessary to find an interim solution that would address the issues of the current Teradata DWH (low performance, expensive operation) as soon as possible.

Selected approach

For the interim solution, our UK partner company, LEIT DATA, selected a migration of the existing Teradata DB to Snowflake. We decided to keep the current data model to retain backward compatibility of reports and integrations and to be as quick and efficient as possible. This enabled us to keep existing reporting tools (e.g., SAP BusinessObjects) with only minimal tweaks. The strategic project also includes a new reporting solution (Power BI) successfully integrated with the new Snowflake DB.

The Teradata ingestion pipeline consisted of many stored procedures run by various triggers. This solution was replaced with a more maintainable set of Python scripts that ingest data from the S3 batch files already generated for the Teradata solution.
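The shape of such a replacement can be sketched as follows. The stage name, file-format options, and file-naming convention here are illustrative assumptions; the real scripts would list files from S3 (e.g., via boto3) and execute the statements through the Snowflake Python connector:

```python
# Minimal sketch of a batch-file ingestion planner. Stage, table, and file
# names are assumptions, not the actual customer pipeline.

def copy_statement(table: str, batch_file: str, stage: str = "@ingest_stage") -> str:
    """Build a Snowflake COPY INTO statement for one S3 batch file."""
    return (
        f"COPY INTO {table} "
        f"FROM {stage}/{batch_file} "
        f"FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '\"')"
    )

def plan_ingestion(batch_files: list[str]) -> list[str]:
    """Derive the target table from the file-name prefix (e.g.
    'orders_2023-10-01.csv' loads into 'orders') and emit one COPY
    statement per file, in a deterministic order."""
    statements = []
    for name in sorted(batch_files):
        table = name.rsplit("_", 1)[0]
        statements.append(copy_statement(table, name))
    return statements

for stmt in plan_ingestion(["orders_2023-10-01.csv", "customers_2023-10-01.csv"]):
    print(stmt)
```

Keeping the statement-building logic as pure functions like this makes the pipeline easy to unit-test without any connection to S3 or Snowflake.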

We also found that the current Teradata security model was neither manageable nor scalable, as it consisted of more than 1 million individual SQL statements (“GRANT”s). We implemented a new security model leveraging Snowflake’s native data classification features. This enabled the customer to efficiently control access to columns and tables containing sensitive PII data.
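The gist of a tag-based model can be sketched in a few lines: sensitive columns are tagged once, and access policies then key off the tag instead of millions of per-object GRANTs. The table, column, and tag names below are invented for illustration, not the customer’s actual model:

```python
# Illustrative sketch: generate Snowflake tagging statements for columns
# flagged as PII. Names are hypothetical.

PII_COLUMNS = {
    "customers": ["email", "phone"],
    "orders": ["delivery_address"],
}

def classification_statements(pii_columns: dict[str, list[str]]) -> list[str]:
    """Emit one CREATE TAG statement plus one ALTER per sensitive column."""
    stmts = ["CREATE TAG IF NOT EXISTS sensitivity"]
    for table, columns in sorted(pii_columns.items()):
        for column in columns:
            stmts.append(
                f"ALTER TABLE {table} MODIFY COLUMN {column} "
                f"SET TAG sensitivity = 'pii'"
            )
    return stmts

for stmt in classification_statements(PII_COLUMNS):
    print(stmt)
```

Generating the statements from a single source of truth keeps the model auditable: the PII inventory lives in one place rather than being scattered across a million grants.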

The migration took a team of approximately 10 people over 1.5 years. Much effort was spent on extensive testing to ensure that the reports were accurate “to the penny.”

High-level architecture transition


The Data Warehouse migration to Snowflake enabled the customer to decommission the legacy Teradata platform, eliminating its support and maintenance costs, and to replace a dedicated team of 8 Teradata support contractors with a smaller permanent Data Engineering squad focused on strategic data-value products.

This alone resulted in multi-million-pound yearly savings. Snowflake also made us rethink how the team delivered data products and optimize team effectiveness, decreasing time-to-market from 3+ months to less than four weeks.

Snowflake data federation allows easy sharing of the migrated database (in the legacy format) with the new strategic data warehouse (in the Data Vault format). This accelerated the migration path to the strategic data platform.

It also had these additional benefits:

  • Orders-of-magnitude speedup in report generation and data processing.
  • An easily manageable, scalable, and auditable security model ensuring full GDPR and PII protection compliance.
  • Reduced complexity for the data visualization, data science, and analytics communities within the organization, increasing productivity.

Lessons learned

Here are key issues we came across during the project and lessons learned from them:

  • Large-volume data egress from the Teradata platform appears to be throttled at the hardware level. The export ran extremely slowly (300 TB took a month), and after investigating every other candidate (network stack, landing zone, etc.), we concluded that the root cause was the Teradata platform itself.
  • The Teradata platform rounds decimals in unexpected ways. This was compounded by a poor design choice in the original data model (using float instead of decimal for storing financial data), which led to differing results when reconciling and cross-checking reports from Teradata vs. Snowflake. Each such discrepancy had to be fully investigated, resulting in a lengthy testing period.
  • Some companies provide services for out-of-support Teradata infrastructure (e.g., replacing failed disks) and may be interested in buying out existing systems after migration.
  • As part of any large-scale data migration, review all existing reports to identify those that are unused or used only sporadically. This can be done by reviewing report access logs, or by replacing reports with unclear ownership or usage with a static notice asking users to contact the migration team. The goal is to eliminate legacy reports and thus reduce the overall testing effort.
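A first pass over the access logs can be automated with a short script along these lines; the report names and log format below are hypothetical:

```python
from collections import Counter
from datetime import date

def find_stale_reports(access_log, known_reports, cutoff):
    """Return reports never accessed on or after `cutoff`; these are
    candidates for retirement before migration testing begins."""
    recent = Counter(name for name, when in access_log if when >= cutoff)
    return sorted(r for r in known_reports if recent[r] == 0)

# Hypothetical (report_name, access_date) records parsed from BI access logs.
log = [
    ("daily_sales", date(2023, 9, 30)),
    ("daily_sales", date(2023, 10, 2)),
    ("legacy_margin_report", date(2021, 1, 15)),
]

stale = find_stale_reports(log, {"daily_sales", "legacy_margin_report"}, date(2023, 1, 1))
print(stale)  # ['legacy_margin_report']
```

Running such a check early shrinks the reconciliation surface: every retired report is one less report to verify "to the penny" after migration.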

Contact us to get started

Our team participated in critical architecture design and delivery management roles. Contact us for a free assessment session where, together with your data leadership team, we will evaluate the potential for savings and for enhancing your agility in delivering data-value products.



Meet Our New Colleague: Welcoming Pali Jasem to Our Team

We are happy to introduce you to our new colleague, Pali Jasem, an experienced professional with over 20 years in IT and consulting. Pali’s experience spans a wide range of areas, including data processing, artificial intelligence, business analytics, knowledge discovery, UX/CX, solution architecture, and IT product management.

Before joining Grow2FIT, he held the position of CTO at GymBeam, where he helped grow the company and build the IT team. Previously, he worked for companies such as Pelican Travel, Solar Turbines San Diego, Seznam Prague, and other tech start-ups and corporations.

He is currently working as a business architect on a web applications development project for our client Solargis, where he applies his experience in business analysis and architecture. We are happy that Pali will expand our expert team and wish him many personal and professional successes.