Growth is not an accident. It’s a perfect fit.

Back to all news

Reference: Solargis – Custom development of web applications

The climate crisis is the most significant challenge humanity currently faces, and there is no quick solution. At Grow2FIT, we are always keen to work on projects that help improve the situation.

One of the main factors contributing to the climate crisis is energy produced from fossil fuels. The solution is to switch to renewable energy sources.

Solargis provides meteorological data, software solutions and consultations for operators of photovoltaic power plants to reduce risks and optimise their performance. Independent institutions have repeatedly rated the data supplied by Solargis as the most reliable worldwide.

The 10-member Grow2FIT team, consisting of analysts, SW architects and developers, designs and implements new features for the web applications through which Solargis clients access meteorological data. Our team works on complex requirements, developing a modern serverless cloud solution built on a microservice architecture. Thanks to the involvement of the Grow2FIT team, Solargis can deliver new functionalities and products to end clients faster, and its internal team has been enriched with know-how gained from previous projects.

Client Statement

“The Grow2FIT team is an important partner for us. Thanks to its experience and flexibility, it quickly integrated into our software development structures, speeds up our development, and brings new perspectives.”

Marcel Šúri, CEO

About the client

Solargis is a technological and consulting company providing services to companies engaged in developing and operating solar power plants. It provides meteorological data, in-house developed software, and technical advice on evaluating and managing solar energy production.

Thanks to 21 years of focused in-house research and technology, Solargis has become a leader in this field. Today, it works for over 1,000 medium and large organisations from 100+ countries, helping them make qualified financial decisions, improve technical design, and optimise operations. Solargis data, products and services have become essential to the development of solar power plant projects. It also provides solar energy performance evaluation, monitoring and forecasting services.

Provided services

Key Technologies

  • Angular
  • TypeScript
  • AWS – CloudFormation, DynamoDB, Lambda, S3


ETL: no servers = no worries

Serverless is an approach to performing computing operations and running solutions without having to manage servers, even virtual ones. Anyone who manages tens to hundreds of servers knows that this brings a good dose of responsibility, and getting rid of it is not only pleasant but also efficient, as the biggest cost item is usually the cost of administration.

There are currently several options for implementing a serverless ETL process. We will briefly describe them using services in the AWS cloud.

AWS Glue

One of the basic services for the ETL process is AWS Glue, which AWS presents as ETL without infrastructure. In principle, it executes Spark jobs on scaled AWS infrastructure. Individual jobs can be written in Python or Scala and run in a distributed way over data located in the S3 object store (another serverless service). AWS Glue allows you to view data, transform it, and make it available for browsing and querying via SQL.

Simply put, data (for example in CSV or Parquet) is processed into a structure and can then be accessed as a table, no matter how big it is or how many files it consists of. Other sources can be a relational database (AWS provides several, such as PostgreSQL or Oracle), MongoDB, or another database accessible via a JDBC connector.

AWS Glue Catalog, a metadata catalogue, provides support for structuring semi-structured data. It is built on the Hive Metastore (it supports Hive DDL), and its API is compatible with the Apache Hive Metastore. It stores information about tables, columns, partitions, etc. In addition to the data catalogue, working with data is also supported by another service, the schema registry.

The processing of source data is the task of crawlers: jobs executed at a specified time or triggered by an event. Individual jobs can, of course, also be scheduled or chained into follow-up steps to create a workflow.
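In a real Glue job the transform step would be written in PySpark, but the principle can be sketched in plain Python. A minimal illustration, assuming a made-up CSV layout (the column names and values here are hypothetical):

```python
import csv
import io

def transform(csv_text):
    """Read raw CSV rows and normalise them into a list of dicts,
    the way a Glue job turns files in S3 into queryable table rows."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = []
    for row in reader:
        rows.append({
            "site": row["site"].strip().lower(),
            "ghi_kwh_m2": float(row["ghi"]),  # global horizontal irradiation
        })
    return rows

raw = "site,ghi\nBratislava, 1250.5\nSeville, 1790.0\n"
table = transform(raw)
```

In Glue the same idea scales out: the job runs distributed over all files of a dataset, and the resulting schema is registered in the Glue Catalog so the data can be queried as a table.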

AWS Lambda

An interesting option for the ETL process is the AWS Lambda service. Here, the individual parts of the processing are written as code that the AWS Lambda service executes directly. It is a code-execution engine where events trigger the written code. Events can come from another AWS service (e.g. adding a file to S3), from a scheduler, or from a data-flow control service.
Pieces of code take the form of functions, which can also be packaged into containers. Supported languages are Node.js, Python, Ruby, Go, Java, and C#; code inside containers can, of course, be in any language. Container size is limited to 10 GB, and each function can be allocated up to 10 GB of memory. Up to a thousand function instances can run concurrently, and a single run of a function is limited to 15 minutes. For more demanding tasks, parallel execution is assumed.
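A Lambda function is just a handler that receives the triggering event. The sketch below shows the typical shape for an S3-triggered function; the bucket and key names are made up, and instead of deploying to AWS we invoke the handler locally with a fake event:

```python
import json
import urllib.parse

def handler(event, context=None):
    """Minimal Lambda-style handler: triggered by an S3 'object created'
    event, it extracts the bucket and key of the new file."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    # A real function would now read s3://bucket/key and transform it.
    return {"statusCode": 200,
            "body": json.dumps({"bucket": bucket, "key": key})}

# Local invocation with a fake S3 event (in production AWS delivers it):
fake_event = {"Records": [{"s3": {"bucket": {"name": "raw-data"},
                                  "object": {"key": "2021/12/meteo.csv"}}}]}
result = handler(fake_event)
```

Because the handler is plain code, it can be unit-tested locally like this before it is ever wired to an S3 bucket.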

AWS Batch

If 15 minutes is not enough for a task, you can use AWS Batch. This is again a service where server maintenance is not expected, although in this case it is not completely serverless. AWS Batch lets the user run demanding batch jobs on an elastic cluster of servers or containers. Standard EC2 instance images are used for the servers; for containers it is the ECS Fargate service (de facto a containers-as-a-service platform). There is no need to take care of servers, their administration, maintenance, etc. A batch job works in such a way that AWS Batch initiates the "creation" of a cluster of servers on which the job runs, even for hours or days. When the job is completed, the servers or containers are "shut down" and "dropped".

The service is useful for batch data processing, with practically no other limitations: the only constraint is that the workload must run on operating systems, or in containers, supported by AWS. The service itself is "free": you pay only for the seconds of compute used.

Amazon Athena

As part of the ETL process, it is usually necessary to access the data for querying or exploration. Amazon Athena is dedicated to these tasks. It is an interactive query engine: the user accesses data through a JDBC connector and uses SQL queries to browse, transform, or aggregate it.
Data stored as files lives in the S3 repository. Amazon Athena is integrated with the AWS Glue Catalog and thus presents the data to the user as tables and databases, no matter how big the data is or how many files it actually spans. Queries can, of course, also be stored and invoked through a scheduler (the AWS Step Functions service), based on an event, or via an API call.

It is a serverless service, and the service itself takes care of performance and scaling. It is charged per amount of data scanned ($5 per 1 TB).
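The per-terabyte pricing makes query costs easy to estimate. A small illustration, assuming the $5/TB rate above and Athena's documented 10 MB minimum billed per query:

```python
def athena_query_cost(bytes_scanned, price_per_tb=5.0):
    """Estimate Athena query cost: $5 per TB scanned,
    with a 10 MB minimum billed per query."""
    MB = 1024 ** 2
    TB = 1024 ** 4
    billed = max(bytes_scanned, 10 * MB)
    return billed / TB * price_per_tb

# A query scanning 200 GB of Parquet files:
cost = athena_query_cost(200 * 1024 ** 3)  # roughly $0.98
```

This is also why columnar, compressed formats such as Parquet pay off with Athena: scanning fewer bytes directly means a smaller bill.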

This is also the primary service for access by business-intelligence and reporting tools. The ODBC and JDBC protocols allow connections from, for example, Qlik, Tableau, DBeaver, or other similar tools for manipulating data via SQL.

Conclusion

In conclusion, we can state that in the AWS cloud it is possible to build a complete data-processing pipeline without a single server or infrastructure administrator. Another advantage is that charging is strictly "pay per use": you only pay for the compute consumed, and nothing has to be pre-allocated. Related services such as the metadata catalogue, the processing scheduler, flow designers, etc., are free.

Author

Miloš Molnár
Grow2FIT BigData Consultant

Miloš has more than ten years of experience designing and implementing BigData solutions in both cloud and on-premise environments. He focuses on distributed systems, data processing, and data science using the Hadoop tech stack and the cloud (AWS, Azure). Together with his team, Miloš has delivered many batch and streaming data-processing applications.
He is experienced in providing solutions for enterprise clients and start-ups alike. He follows transparent architecture principles and designs solutions that are cost-effective and sustainable within a specific client's environment, aligned with enterprise strategy and the related business architecture.

The entire Grow2FIT consulting team: Our team

Related services


The time of QR codes and instant payments is coming

The year is 2018, and a new type of product is being created in the bodies of the EPC (European Payments Council): instant payments in EUR. This is a significant change in the speed of capital movement within Europe.

What will it bring?

First, it gives entrepreneurs and consumers (citizens) the ability to pay for services, or receive payments for delivered goods, in real time.
Second, it allows clients to pay online for services without using a payment card (an alternative payment method outside the card schemes was a long-standing demand of entrepreneurs because of the high fees on accepting card payments).

What has changed?

Banks in the EU have joined a system in which a payment can be made within 10 seconds (including verification that the beneficiary's account is correct and valid).

Banks use the ECB's (European Central Bank) systems, which allow them to transfer funds quickly under the SEPA Instant scheme.

To use instant payments as efficiently as possible for paying for services, an efficient way of receiving payments is needed: a QR code or a universal payment link. The QR code/payment link contains all the essential details for making the payment: the beneficiary's account number, the amount, and possibly a payment identifier (an end-to-end reference or payment reference). After the payer scans it, the QR code/payment link pre-fills the payment in the financial institution's internet or mobile banking, and an instant payment sends the funds to the payee's account. The payee can dispose of the funds immediately after crediting (unlike in the card schemes, where the funds reach the account after several working days).

QR codes and their structure can have different definitions and standards. There are several used in the world (we will be happy to advise you).
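To make the idea concrete, here is a deliberately simplified sketch of the payload such a QR code or payment link carries. This is an illustrative format only, not any real standard (real schemes such as the EPC QR code or PAY by square define their own field structure, encoding, and checksums):

```python
def build_payment_payload(iban, amount_eur, reference=""):
    """Build a simplified QR/payment-link payload carrying the essential
    details: beneficiary account, amount, currency, and an optional
    end-to-end (E2E) payment reference. Illustrative format only."""
    if amount_eur <= 0:
        raise ValueError("amount must be positive")
    fields = ["IBAN:" + iban, "AMT:%.2f" % amount_eur, "CUR:EUR"]
    if reference:
        fields.append("REF:" + reference)
    return "|".join(fields)

# Hypothetical IBAN and invoice reference:
payload = build_payment_payload("SK3112000000198742637541", 49.90, "INV-2022-001")
# The payer's banking app decodes these fields and pre-fills the payment.
```

Whatever the concrete standard, the flow is the same: the payee generates the payload once, the payer scans it, and the banking app turns it into a pre-filled instant payment.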

What can we use the Instant Scheme with QR codes for? We will describe it with the example of a self-employed person.

After performing the agreed work, the self-employed person can immediately get paid for their services via a QR code. They do not have to wait several days for an invoice to be paid, operate a payment terminal (where the money would arrive a few days after the transaction), or ask the payer to pay in cash (which then has to be deposited into the account). After the QR code/payment link is created and read by the payer (via a mobile application), the money arrives immediately in the business account. If the application can generate an invoice/receipt, it can be sent electronically to the payer or recorded straight into the accounting.

The solution can also be used in other areas, for example by small businesses for which a card scheme and terminal would be too expensive, or to combine QR-code payments with various discount schemes.


What was the first year of Grow2FIT?

Starting a new company during a global pandemic and a newly announced lockdown may not seem like the best idea. The inability to meet potential business partners in person and having to recruit online were severe complications. But quality, fair relationships from the past and well-established products and services proved essential, and great determination and drive playfully overcame these complications.

Over the past 12 months, it has also been shown that the ideas on which Grow2FIT is built have found a response in the market:

  • We search for and bring innovative topics and solutions.
  • We adapt intensively to the needs and possibilities of clients (in terms of scope and deadlines of solutions or flexible allocations).
  • We place great emphasis on finding interesting and demanding projects for our coworkers, so the know-how of the whole company grows.

In 2021, we managed to build a team of 30 IT specialists from scratch. And what did we do? Here are some of the interesting projects:

  • We are excited to work for the Slovak company Solargis, whose products are successful worldwide. Our team of 6 analysts and developers, together with Solargis's internal team, develops solutions that provide meteorological data to operators of photovoltaic power plants.
  • We have established cooperation with Greyson Consulting, one of the top consulting companies in the Czech Republic and Slovakia. We provide Greyson with consulting services in the field of Banking and Big Data for the design and implementation of digital banking for a large Austrian banking group. We are also helping with the merger of two Czech banks.
  • We provide consulting services and deliveries in the field of DevOps and modern infrastructure (Kubernetes, Ceph, MaaS) for the Slovak company InterWay.
  • Last year, we provided several teams and individual IT specialists to successful IT suppliers such as Unicorn Systems, ADASTRA, and CGI.
  • At the end of the year, we reached an agreement under which, from the new year, a 7-member team of ours will work on the design and implementation of the EU Sovereign Cloud. When done, this will give EU companies doing business in sensitive areas (public sector, healthcare, ...) the opportunity to operate their services in an EU cloud and shed their dependence on predominantly US cloud operators.

Our vision is to build an international team of IT specialists and thus mitigate the risk of a lack of specialists in the Slovak labor market. We devoted a great deal of energy last year to setting up recruitment in the Balkan countries (Serbia, Bosnia and Herzegovina, Croatia, North Macedonia), and today we have the first colleagues from these countries on the team.

An important milestone for the company was the arrival of Peter Brtáň in the summer as the second co-founder. He helped intensify our business and delivery activities in the second half of the year and prepare for our goals in 2022: further growth in the domestic Czech-Slovak market and expansion abroad.

We thank all business partners and co-workers for their trust, great cooperation, and fair relations. It was a great year, and we wish you health and an appetite for new adventures in the new year.


A modern approach to the integration platform

The rapid development of new technologies such as containers (Kubernetes, etc.) and cloud platforms (Azure, AWS, Google, etc.) opens up a new class of problems and challenges. The world is ruthless in this regard, and keeping up with modern trends and technologies is not easy.

Converting all existing workloads into modern, ubiquitous containers in order to achieve the ideal of a uniform technology stack that is easy to maintain is in many cases unrealistic and inefficient, both technologically and in terms of cost. This ideal is especially hard to reach in large companies (banks, insurance companies, etc.).

A new type of problem we must face is how to integrate, simply and with the least effort, all workloads hosted on virtual machines, in containers, or run on WebSphere, WebLogic, etc. How do we monitor individual services in such a heterogeneous environment? How do we manage access to and administration of individual services?

The answer is simple: Service Mesh. Today's technologies based on Service Mesh allow you to:

  • address most of these challenges and the problems associated with operating services in large environments
  • maintain a sufficient overview of your services
  • simplify the operation of distributed applications with all the pitfalls associated with them, especially in the context of security
  • achieve clearly managed distribution of application configurations and advanced control and management of access credentials
  • implement various deployment strategies, such as canary deployment, blue-green deployment, and others, more simply.
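The canary strategy from the last point, for example, boils down to weighted traffic splitting, which the mesh's proxies perform per request. A minimal deterministic sketch of the idea (illustrative only, not tied to any specific mesh product):

```python
def route(request_id, canary_weight=10):
    """Split traffic the way a service-mesh proxy does for a canary
    deployment: send `canary_weight` percent of requests to the new
    version and the rest to the stable one. Deterministic on request id
    so the split is exact and reproducible."""
    return "canary" if request_id % 100 < canary_weight else "stable"

targets = [route(i) for i in range(1000)]
canary_share = targets.count("canary") / len(targets)  # 10% on canary
```

In a real mesh this weighting lives in routing configuration, not application code, which is precisely the point: traffic policy changes without touching the services themselves.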

As you can already guess, all these advantages ultimately go hand in hand with accelerating release cycles and getting new applications and functionality to the customer faster.

Yes, a modern Service Mesh approach can replace existing integration platforms with added value. Of course, the company's needs must be considered to identify which Service Mesh technology to adopt in each individual case.

Most often in this context we hear: "We're not ready for this technology." These worries are unnecessary; the magic of a properly chosen Service Mesh is that it lets you integrate legacy technology without major changes to existing applications. You can then gradually migrate all applications to containers, or leave them in their original form.

If you are interested in the specific benefits that Service Mesh could bring you in your environment, contact us and we will schedule a free consultation with you.

Service Mesh architecture


Added value sourcing

The main factor that currently determines the success or failure of companies, programs, or projects is the quality of human resources. Human talent has become the scarcest resource, even more difficult to obtain than capital. Competition in the labor market today is at least as strong as competition for contracts and business opportunities.

How to search for human talent

For this reason, companies that want to succeed in this strong competition must use all available forms to search for (and of course retain) human talent:

  • Internal recruitment (through own recruitment department)
  • External recruitment (search for employees through specialized companies)
  • Use of the sourcing service (leasing of human resources) – temporary allocation of employees of specialized external companies for their projects. The most common reasons for using a sourcing service are the following:
    • Short-term/temporary need for human resources – in case of a sudden large number of projects or the need to complete a project by a set deadline. In this case it does not make sense to hire new employees, as it will be a problem to utilize them later.
    • Lack of specific know-how when it is more effective to involve a ready-made expert or senior in a given area, whether for full or partial capacity according to the needs of the project/customer.
    • Or the company simply cannot find suitable people for internal employment.

Grow2FIT offers two of the above services: sourcing and external recruitment.

Added value of Grow2FIT

Why should you choose these services from our company Grow2FIT?

We believe that thanks to the following, we offer our customers unique services that meet their needs:

  • Maximum adaptation to the client’s needs – we offer various forms of our services
    • Long-term and short-term sourcing.
    • Finding employees for your internal employment = external recruitment.
    • We provide our specialists for a full or partial allocation, which you will use if you have minor, specific problems (in this case we are talking about the Consulting service).
  • We offer only verified specialists
    • We have implemented a sophisticated multi-round recruitment system.
    • We involve specialists in sourcing projects as well as our custom development projects to gain the widest possible experience.
  • International talents
    • We are looking for quality talent not only in Slovakia and the Czech Republic but across the whole of Europe (which makes perfect sense, since more and more projects are remote).
    • We place great emphasis on English and on a willingness to travel as much as the current pandemic situation allows (e.g. always for the planning of a new sprint).
  • We help build the careers of our employees and co-workers
    • We pay special attention to the proper career growth of our employees and co-workers and we always try to find them the most suitable project concerning their personal and professional level of development.
  • We have a wide scope – we primarily focus on the following areas (but we can also help you with other technologies).
    • Modern development (cloud-based applications, containerization, microservice architecture …) – Java stack, JavaScript frontend frameworks, Kubernetes, OpenShift, Docker, SQL, and NoSQL DB including SDS – Ceph
    • DevOps – CI/CD – development, testing, and deployment automation
    • Modern infrastructure – automated infrastructure management, design of cost-effective infrastructure with commodity HW (Design to cost)
    • DB solutions – BigData, SDS – Ceph, PLSQL development
    • Testing – solutions for automated tests, manual tests
    • Agile processes – coaching, consulting
    • Jira and other Atlassian tools – migration to Jira cloud, customization, and integration of Atlassian tools into your organization


How can Modern DevOps approach help you?

Traditional companies laid the foundations of their IT departments in the 1980s and 1990s: strongly process-oriented, and often organized as dedicated teams for individual IT resources such as storage, networking, security, infrastructure, and so on.

Users who need these resources request them using tickets, and a team member processes each request. This ticket-based resource management suffers from many ailments. There is always a chance that the team member misunderstands the request and does not deliver it in the required quality, or makes a human mistake, for example a typo.

Processing more complex requirements that combine these resources multiplies the problems at a higher level: individual teams have different priorities, motivations, and discipline, so completing a more complex request takes extremely long. The difficulty of obtaining resources for a project creates the preconditions for a "gray" infrastructure, such as a server stashed somewhere in the office with the latest Docker installed, simply because it is "faster". At the same time, a need arises for go-betweens whose job is to push tickets through, often by personally visiting a person and standing over them until the request is met.

This leads to a loss of motivation among the people in the teams: all day long they deal only with user requests, a kind of pure operations, with someone constantly coming to them needing "this" and "now". It is easy to work a lot without the substance of the matter ever being solved systematically. As a result, changes or deployments of new environments for new applications can take weeks to months. This is a big difference compared to a hypothetical fintech startup from Asia that can deploy its application twice a day without disrupting the service, and thus converge faster to the required functionality.

However, DevOps itself is only the culmination of the "agilization" of an organization, and the effort to run it in a traditional corporate environment is doomed to failure as long as the transaction costs of obtaining an IT resource remain high.

Benefits of modern DevOps

The term DevOps is often misused only as a trendy name for the old administration team. The DevOps concept arose from the need for a quick response to the rapidly changing situation in young Internet startups.

Its beginnings go back to companies such as Facebook and Google, which grew extremely fast: they gained on the order of a million new users a day, which put enormous pressure on a rapid sequence of changes. Technologies and processes that worked with fewer users quickly stop working and need to be changed. Operating during such growth therefore looks more like changing the engine and wheels of a car at full speed on the highway, without interrupting the ride.

Among the main benefits of implementing the DevOps principles, we consider:

  • higher solution quality
  • automation = lower error rate
  • faster time to market
  • lower turnover of IT specialists (less monotonous, stereotypical work)
  • higher employee engagement

Prerequisites

The implementation of modern DevOps requires the fulfillment of certain prerequisites:

  • Quality and experienced implementation team
  • Management support
  • Support for end-to-end changes (technological, process, and organizational)
  • Involvement of other organizational units in IT in changes