-Awesomation-
  • Home
  • IT Transformation Basics
  • Decision Helper
  • HyperAutomation
  • DATA AI ML
  • The Fusion: Au-AI
  • Low Code World
  • Journeys & UseCases
  • Get in touch
  • External Useful Resources

A *Beta* resource that is getting ready to help you navigate Digital Tech!


Are you AI ready?

Enabling Artificial Intelligence


AI is the buzzword. Everyone wants to use AI to analyze their business better and to gain more value.

With people in the loop, the general understanding is that if there is enough data, an organization can implement AI.


Everyone expects the path to be

Data > Information > AI > Value


But AI programs are extremely difficult to implement. The wider the spread of the data, the harder the journey. Most programs get stuck at the first hurdle: creating a data lake. Even when the scope is limited, merging the data sources, or simply making them accessible, is a challenge to manage.

Once that hurdle is cleared, the typical steps of an AI program are:


Data:

  • Sourcing the data
  • Selection of the data
  • Synthesis of the data
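
As a rough illustration of these three steps, here is a minimal sketch using only Python's standard library; the systems, field names, and records are invented for the example:

```python
# Sketch of the three data steps: sourcing, selection, synthesis.
# The records and field names are illustrative assumptions.

# Sourcing: pull records from two hypothetical systems (here, in-memory lists).
crm_records = [
    {"customer_id": 1, "name": "Acme", "region": "EU"},
    {"customer_id": 2, "name": "Globex", "region": "US"},
]
billing_records = [
    {"customer_id": 1, "revenue": 1200.0},
    {"customer_id": 2, "revenue": 800.0},
]

# Selection: keep only the fields relevant to the question at hand.
selected = [{"customer_id": r["customer_id"], "region": r["region"]}
            for r in crm_records]

# Synthesis: join the sources into one analysis-ready record per customer.
revenue_by_id = {r["customer_id"]: r["revenue"] for r in billing_records}
dataset = [dict(r, revenue=revenue_by_id.get(r["customer_id"], 0.0))
           for r in selected]

print(dataset)
```

In practice the sources would be databases, APIs, or files, and the join logic would live in a pipeline tool, but the shape of the work is the same.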


Applying Data Science:

Data Engineering:

  • Exploration: An approach to analyzing datasets to summarize their main characteristics, often with visual methods.

  • Cleaning: The process of fixing or removing incorrect, corrupted, incorrectly formatted, duplicate, or incomplete data within a dataset.

  • Normalizing: Making data more convenient to consume. It includes processes like cleaning the data, removing duplicates, and conforming data to a specific data model.

  • Feature Engineering: The manipulation — addition, deletion, combination, mutation — of your dataset to improve machine-learning model training, leading to better performance and greater accuracy.

  • Scaling: Adjusting the number of machines, or the size of each machine, to match the volume of data to be processed. Adding machines is called scaling out (horizontal scaling); increasing a machine's size is called scaling up (vertical scaling); the reverse operations are scaling in and scaling down.
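
The engineering steps above (apart from scaling, which is an infrastructure concern) can be sketched on a toy dataset; the rows and fields here are invented purely for illustration:

```python
import statistics

# Illustrative raw data: some rows are duplicated, incomplete,
# or inconsistently formatted (price stored as a string).
raw = [
    {"id": 1, "price": "100", "qty": 2},
    {"id": 1, "price": "100", "qty": 2},   # duplicate
    {"id": 2, "price": None, "qty": 5},    # incomplete
    {"id": 3, "price": "250", "qty": 1},
]

# Cleaning: remove duplicates and rows with missing values.
seen, cleaned = set(), []
for row in raw:
    if row["id"] in seen or row["price"] is None:
        continue
    seen.add(row["id"])
    cleaned.append(row)

# Normalizing: conform every row to one data model (price as float).
for row in cleaned:
    row["price"] = float(row["price"])

# Feature engineering: derive a new column from existing ones.
for row in cleaned:
    row["order_value"] = row["price"] * row["qty"]

# Exploration: summarize the main characteristics of the cleaned set.
prices = [row["price"] for row in cleaned]
print(len(cleaned), statistics.mean(prices))
```

Real pipelines do the same work with dataframe or SQL tooling, but each step maps onto one of the bullets above.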


Data engineering is followed by these steps.


Modelling:

  • Model Selection
  • Training
  • Evaluation
  • Tuning
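
A toy end-to-end pass through these modelling steps might look like the following; the data and the two candidate models are assumptions made purely for illustration:

```python
# Toy modelling loop: train candidate models, evaluate on held-out
# data, and select/tune by picking the lowest error. Stdlib only.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]   # roughly y = 2x

# Split into training and evaluation sets.
train_x, train_y = xs[:4], ys[:4]
test_x, test_y = xs[4:], ys[4:]

def fit_linear(x, y):
    """Training: least-squares slope through the origin, w = sum(xy)/sum(x^2)."""
    w = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    return lambda v: w * v

def fit_mean(x, y):
    """A simpler baseline candidate: always predict the training mean."""
    m = sum(y) / len(y)
    return lambda v: m

def mse(model, x, y):
    """Evaluation: mean squared error on held-out data."""
    return sum((model(a) - b) ** 2 for a, b in zip(x, y)) / len(x)

# Model selection / tuning: keep the candidate with the lowest held-out error.
candidates = {"mean": fit_mean, "linear": fit_linear}
scores = {name: mse(fit(train_x, train_y), test_x, test_y)
          for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores[best])
```

Real programs swap in proper libraries, cross-validation, and hyperparameter search, but the select-train-evaluate-tune loop is the same.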


Extracting Value:

  • Registration
  • Deployment
  • Monitoring
  • Retraining
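
These four steps form a loop, which can be sketched in miniature; the registry, threshold, and model here are illustrative assumptions, not a real MLOps stack:

```python
# Toy value-extraction loop: register a model version, "deploy" it
# behind a predict function, monitor its error on live data, and
# retrain when the error drifts past a threshold.
registry = {}  # Registration: version -> model parameters

def train(data):
    """Fit a trivial model: the mean of the observed values."""
    return {"mean": sum(data) / len(data)}

def register(version, model):
    registry[version] = model

def predict(version):
    """Deployment: serve predictions from the registered model."""
    return registry[version]["mean"]

# Initial train + register + deploy.
register("v1", train([10, 11, 9, 10]))

# Monitoring: compare predictions against newly observed values.
live = [20, 21, 19]
error = sum(abs(predict("v1") - y) for y in live) / len(live)

# Retraining: the monitored error has drifted, so fit a new
# version on recent data and register it alongside the old one.
if error > 5:
    register("v2", train(live))

print(sorted(registry), error)
```

In production, registration and deployment are handled by a model registry and serving platform, and monitoring runs continuously, but the cycle is the one above.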


While all of the above is happening, let’s not forget the following constraints:

  • Legal issues
  • Ethical issues
  • Historical bias
  • Security
  • Data authenticity


There are good references available to go into the details of all the above steps.

Before an AI program starts, organizations need to think through all of the above to avoid pitfalls.



Copyright © 2024 Awesomation - All Rights Reserved.

Disclaimer: The content is based on the author's experience. It may not cover some areas specific to certain aspects of technology usage, and may read very similarly to such publications in places.

