Low-code development can be used for managing big data. Here’s how to make it work for your organization.

Low-code development tools are aimed at reducing application development time to market. They do this by using graphical user interfaces with point-and-click tools to cobble together applications, making it easy for end user “citizen developers” with no knowledge of programming to develop apps. Low-code tools also allow IT technical staff to insert custom code to endow the apps with functionality that the low-code tools can’t generate on their own.
SEE: Everything you need to know about using low-code platforms (free PDF) (TechRepublic)
Low-code development tools are gaining popularity in companies because users see them as ways to get around IT logjams that prevent them from getting things done.
The market concurs: Gartner predicts that by 2024, 65% of all application development will be done with low-code platforms, and that 66% of large companies will use at least four different low-code application building platforms.
Yet, low code has its limits. For example, low code is designed to work with transactional data in fixed record lengths. This makes low code a non-starter when it comes to working with unstructured big data—or does it? There are ways to use low code with big data if there is enough business value to warrant developing the methodology to facilitate it.
Here’s how it can work. Since low-code development must work with fixed records containing clearly delineated data fields, the major task is formatting unstructured big data into fixed records.
Here are the steps:
IT and business users should identify the specific business problem or use that the application is intended to address, along with the types of big data that will be needed. During this step, business users, IT, and data science (if there is a separate DS department) should also identify the big data the application will not need. You don’t want to bring in any more big data than is necessary, because it creates needless overhead and bogs down processing.
This is a task for data scientists, who will be asked to develop a data filter in the form of an artificial intelligence (AI) algorithm that eliminates unneeded big data before it is forwarded to the low-code application.
For example, if you are using big data weather forecasts for the Midwest but don’t need to know the weather in Australia, the filter can exclude any data that doesn’t pertain to the Midwest. This reduces big data file size.
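Such a filter can start as simple rule-based logic and graduate to a trained model later. Here is a minimal sketch in Python, assuming each incoming record is a dict with a free-text `location` field; the field name and state list are hypothetical, not taken from any particular weather feed:

```python
# Rule-based pre-filter: drop records that don't pertain to the Midwest
# before they ever reach the low-code application. A trained classifier
# could replace the keyword test later without changing the interface.

MIDWEST_KEYWORDS = {"illinois", "iowa", "kansas", "michigan", "minnesota",
                    "missouri", "nebraska", "ohio", "wisconsin"}

def is_relevant(record: dict) -> bool:
    """Keep a record only if its location text mentions a Midwest state."""
    location = record.get("location", "").lower()
    return any(state in location for state in MIDWEST_KEYWORDS)

def filter_feed(records: list) -> list:
    """Return only the records the downstream low-code app needs."""
    return [r for r in records if is_relevant(r)]
```

Run against a mixed feed, only the Midwest records survive, which shrinks the file the low-code app must ingest.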
Low-code tools come with predefined APIs (application programming interfaces) to major software packages, but they don’t have APIs for every system.
In this step, IT analyzes which systems the low-code app needs to access and determines whether any APIs are missing. If an API doesn’t exist, it might be necessary to code one.
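When no prebuilt connector exists, a thin wrapper that exposes the target system over plain HTTP/JSON is often enough for the low-code tool to call. A hypothetical sketch using only the Python standard library; the host, endpoint, and parameter names are invented for illustration:

```python
import json
import urllib.parse
import urllib.request

class LegacySystemAPI:
    """Thin HTTP wrapper around a legacy system, exposed for the low-code tool."""

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def build_url(self, endpoint: str, **params) -> str:
        """Assemble the request URL; kept separate so it can be unit-tested."""
        query = urllib.parse.urlencode(sorted(params.items()))
        return f"{self.base_url}/{endpoint}?{query}"

    def get(self, endpoint: str, **params) -> dict:
        """Perform the GET and decode the JSON payload."""
        with urllib.request.urlopen(self.build_url(endpoint, **params)) as resp:
            return json.load(resp)
```

Keeping URL construction separate from the network call makes the wrapper testable without touching the live system.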
In this step, IT selects the data needed for the app from the unstructured big data, parses it into discrete fields, and then formats those fields into a fixed record.
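One way to sketch that parse-and-format step in Python: extract named fields from a semi-structured line, then pad each to a fixed width so every output record has the same length. The field layout and the `key=value` input format here are invented for illustration:

```python
# Fixed-record layout: field name -> column width, in output order.
LAYOUT = [("station", 10), ("date", 8), ("temp_f", 5)]

def parse_line(raw: str) -> dict:
    """Parse a semi-structured 'key=value; key=value' line into named fields."""
    pairs = (chunk.split("=", 1) for chunk in raw.split(";") if "=" in chunk)
    return {k.strip(): v.strip() for k, v in pairs}

def to_fixed_record(fields: dict) -> str:
    """Pack parsed fields into one fixed-length record string."""
    parts = []
    for name, width in LAYOUT:
        value = str(fields.get(name, ""))[:width]  # truncate overlong values
        parts.append(value.ljust(width))           # pad short values
    return "".join(parts)
```

Every record that comes out is exactly the same length, which is what a low-code tool’s transactional data model expects.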
SEE: Low-code platforms help with project backlogs and software development training (TechRepublic)
The big data that has been formatted into fixed records must match up with the data records in the other systems the app needs to access.
An extract-transform-load (ETL) tool, informed by the business rules that IT defines in it, can do this step automatically.
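A hand-rolled version of that transform step might look like the following in Python: business rules that normalize the key fields so the big data records join cleanly against another system’s records. The rule set and field names are hypothetical:

```python
from datetime import datetime

def transform(record: dict) -> dict:
    """Apply business rules so the record matches the target system's keys."""
    out = dict(record)
    # Rule 1: the target system stores dates as YYYY-MM-DD, not MM/DD/YYYY.
    out["date"] = datetime.strptime(record["date"], "%m/%d/%Y").strftime("%Y-%m-%d")
    # Rule 2: the target system keys customers by upper-case ID, no whitespace.
    out["customer_id"] = record["customer_id"].strip().upper()
    return out

def join_key(record: dict) -> tuple:
    """Composite key used to match records across systems."""
    return (record["customer_id"], record["date"])
```

In practice an ETL tool encodes these same rules declaratively; the point is that both sides must produce identical join keys before matching can happen.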
The final step is executing the low-code app to see if it picks up the right data, processes it properly, and returns the results the business expects.
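That verification can itself be automated as a small smoke test: feed the pipeline a known sample and assert on the result the business expects. A sketch, where `run_app` is a hypothetical stand-in for the deployed pipeline so the harness is self-contained:

```python
def run_app(records: list) -> dict:
    """Hypothetical placeholder for the deployed low-code app: filter the
    sample down to Midwest records, then summarize them. In practice this
    would invoke the real pipeline end to end."""
    midwest = [r for r in records if r.get("region") == "midwest"]
    return {"count": len(midwest),
            "avg_temp_f": sum(r["temp_f"] for r in midwest) / len(midwest)}

def smoke_test() -> dict:
    """Known input, expected output: fails loudly if the app drifts."""
    sample = [{"region": "midwest", "temp_f": 30},
              {"region": "midwest", "temp_f": 40},
              {"region": "apac", "temp_f": 75}]
    result = run_app(sample)
    assert result["count"] == 2, "app picked up the wrong records"
    assert result["avg_temp_f"] == 35.0, "app returned an unexpected result"
    return result
```

Running this after every change to the filter, the formatting, or the ETL rules catches regressions before business users see them.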
Converting big data to run with low-code applications is time-consuming, but it is well worth it if the converted data can be continuously reused across enough low-code apps to deliver high business value.
Business value is the key, because the bottom line is always whether the data conversion effort is worth it.
Mary E. Shacklett is president of Transworld Data, a technology research and market development firm. Prior to founding the company, Mary was Senior Vice President of Marketing and Technology at TCCU, Inc., a financial services firm; Vice President o…