
GUJARAT TECHNOLOGICAL UNIVERSITY

Gujarat Power Engineering and Research Institute, Mehsana


Academic year
(2022-2023)
GPERI COLLEGE
INTERNSHIP REPORT UNDER SUBJECT OF
SUMMER INTERNSHIP (3170001)
B.E. SEMESTER-VII
BRANCH – Computer Engineering
Submitted by: Chhunchha Sankalp A.
Tripearltech Private Limited

Prof. Vaishnavi Solanki          Mr. Harsh Makwana

(Internal Guide)                 (External Guide)


Company Profile

Company Name: Tripearltech Pvt Ltd


Address: 911, Satyamev Eminence, Science city road, Ahmedabad
Contact No: +91 740548497254 897
Email Id : info@tripearltech.com
Website: www.tripearltech.com
About Us:

Trusted Microsoft Partner for Dynamics and Cloud Solutions, we are one of India's top
Microsoft Dynamics 365 partners. Our team has more than 21 years of experience and
expertise across more than 9 business verticals, which makes us an exceptional ERP and
CRM solution provider nationwide. Dynamics 365 deployment is making the lives of
small and mid-sized businesses much simpler.

At Tripearltech, we see innovation as a clear differentiator. Innovation, along with a
focus on deep, long-lasting client relationships and strong domain expertise, drives
every facet of our day-to-day operations.

For Dynamics 365 implementation, customizations, upgrades, consulting, and training,
we offer dedicated developers and consultants.

Mission:

Allow businesses to see through their balance sheets and dig deep into the statistics
behind each aspect of their business.

Value:

Timely delivery of solutions. Standing by the side of the clients before, during, and
after the delivery.
JOINING LETTER
COMPLETION CERTIFICATE
ACKNOWLEDGEMENT

First, I would like to thank Mr. Harsh Makwana, Director, and Ms. Foram
Vithalani, HR of Tripearltech Pvt Ltd, for giving me the opportunity to do an
internship within the organization.

I would also like to thank all the people who worked along with me in the
organization for their patience and openness, which created an enjoyable
working environment.

I am highly grateful to Principal Dr. Chirag Vibhakar for the facilities
provided to accomplish this internship.

I would like to thank my Head of the Department Prof. Mahesh Prajapati, for
the constructive criticism throughout my internship.

I would like to thank Prof. Vaishnavi Solanki, my internship guide in the Department
of CSE, for her support and advice in completing the internship at Tripearltech
Pvt Ltd.

It is indeed with a great sense of pleasure and immense gratitude
that I acknowledge the help of these individuals.

I am extremely grateful to my department staff members and friends who
helped me in the successful completion of this internship.
ABSTRACT

Data mining is the process of sorting through large data sets to identify
patterns and relationships that can help solve business problems through data
analysis. Data mining techniques and tools enable enterprises to predict
future trends and make more-informed business decisions.

Web scraping is the process of using bots to extract content and data from a
website.

Power Automate is a service that helps you create automated workflows
between your favorite apps and services to synchronize files, get
notifications, collect data, and more.

Low-code Power Automate makes work much easier, as the basic building blocks
are already provided in the software.
Data science and analysis play a significant role today across every industry in
the market, for example finance, e-commerce, business, education, and government.

And so I learned how easily a low-code application can be built, using the
built-in components to accomplish something more creative and innovative
rather than writing all the code from scratch.

Organizations now take a 360-degree view of the behavior and interests of their
customers in order to make decisions in their favor. Data is analyzed through
programming languages such as Python, which is one of the most versatile
languages and helps in doing a lot of things.

Netflix is essentially a data science project that reached the top by
analyzing every single interest of its customers. Key terms used in data
science are: data visualization, Anaconda/Jupyter Notebook, exploratory data
analysis, machine learning, data wrangling, and evaluation using the
scikit-surprise library.
DAY – 1:
BASIC INTRODUCTION AND DOMAIN KNOWLEDGE!
● Data Mining :
Data Mining is a crucial component of successful analytics initiatives in
organizations. The information it generates can be used in Business
Intelligence (BI) and advanced analytics applications that involve analysis of
historical data, as well as Real time analytics applications that examine
streaming data as it's created or collected.

Effective data mining aids in various aspects of planning business strategies
and managing operations. That includes customer-facing functions such as
marketing, advertising, sales and customer support, plus manufacturing,
supply chain management, finance and HR. Data mining supports fraud
detection, risk management, cybersecurity and many other critical business
use cases. It also plays an important role in healthcare, government and
scientific research.

Although there are many ways to do data mining, there are two main types:

Types of Data Mining:


1. Predictive Data Mining Analysis
2. Descriptive Data Mining Analysis
1. Predictive Data Mining Analysis:
Predictive data-mining analysis works on data that may help to know
what may happen later (in the future) in the business. Predictive
data mining can be further divided into four types, listed below:

1. Classification Analysis
2. Regression Analysis
3. Time Series Analysis
4. Prediction Analysis

2. Descriptive Data Mining Analysis:

Descriptive data-mining analysis works on past data to summarize and describe
what has already happened in the business. Descriptive data mining can also
be further divided into four types, listed below:

1. Clustering Analysis
2. Summarization Analysis
3. Association Rules Analysis
4. Sequence Discovery Analysis

I researched a lot regarding the steps of data mining and the scraping of data
in order to get a large amount of uniform data.
● Web Scraping :
Web scraping is the process of using bots to extract content and data from a
website.
Unlike screen scraping, which only copies pixels displayed onscreen, web
scraping extracts underlying HTML code and, with it, data stored in a
database. The scraper can then replicate entire website content elsewhere.

Web scraping is used in a variety of digital businesses that rely on data
harvesting.

Legitimate use cases include:

● Search engine bots crawling a site, analyzing its content and then ranking it.

● Price comparison sites deploying bots to auto-fetch prices and product
descriptions for allied seller websites.

● Market research companies using scrapers to pull data from forums and
social media (e.g., for sentiment analysis).

Web scraping is also used for illegal purposes, including the undercutting of
prices and the theft of copyrighted content. An online entity targeted by a
scraper can suffer severe financial losses, especially if it’s a business
strongly relying on competitive pricing models or deals in content
distribution.

Web scraping has countless applications, especially within the field of data
analytics. Market research companies use scrapers to pull data from social
media or online forums for things like customer sentiment analysis. Others
scrape data from product sites like Amazon or eBay to support competitor
analysis.
Tools for scraping the data:
Python is favored for scraping data because it has many libraries and tools
available, such as BeautifulSoup, Scrapy, ParseHub and so on. We can also use
Selenium, which is well suited to browser automation.

Some important things to keep in mind while web scraping:


● Web scraping can be used to collect all sorts of data types: From images to
videos, text, numerical data, and more.

● Web scraping has multiple uses: From contact scraping and trawling social
media for brand mentions to carrying out SEO audits, the possibilities are
endless.

● Planning is important: Taking time to plan what you want to scrape
beforehand will save you effort in the long run when it comes to cleaning
your data.

● Python is a popular tool for scraping the web: Python libraries like
BeautifulSoup, Scrapy, and pandas are all common tools for scraping the
web.

● Don’t break the law: Before scraping the web, check the laws in various
jurisdictions, and be mindful not to breach a site’s terms of service.

● Etiquette is important, too: Consider factors such as a site's resources;
don't overload them, or you'll risk bringing them down. It's nice to be nice!
DAY – 2:
FINDING APIs TO GET DATA FROM
TASK - 1:
API Research :
In the first task of day 2, I did research on APIs that can be used in Python
through which we can get data on companies or recent trends in a particular
field.

An API (Application Programming Interface) enables companies to open
up their applications' data and functionality to external third-party
developers, business partners, and internal departments within their
companies. This allows services and products to communicate with each
other and leverage each other's data and functionality through a documented
interface. Developers don't need to know how an API is implemented; they
simply use the interface to communicate with other products and services.
API use has surged over the past decade, to the degree that many of the most
popular web applications today would not be possible without APIs.

Types of APIs:
1. Open APIs.
2. Partner APIs.
3. Internal APIs.
4. Composite APIs.

These types of APIs can be used to scrape data; some are paid and some
are not.
APIs and tools found to extract data:
For Facebook, three commonly used options are:
1. Octoparse.
2. Graph API.
3. Visual Scraper API.

For Google, there are many APIs and tools:
1. GData.
2. Extractor.
3. SERPMaster.
4. Android Management.

For Twitter, the most commonly used API library is:

1. Tweepy (a brief usage sketch follows below).
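
As a minimal, hedged sketch of how Tweepy might be used (assuming Tweepy v4 or newer and a valid bearer token from the Twitter developer portal; the query string and token below are placeholders, not values from this internship):

# Minimal Tweepy sketch: search recent tweets for a keyword.
# Assumptions: Tweepy v4+, a valid bearer token (placeholder below).
import tweepy

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential

client = tweepy.Client(bearer_token=BEARER_TOKEN)
response = client.search_recent_tweets(query="data mining", max_results=10)
for tweet in response.data or []:   # response.data is None when nothing matches
    print(tweet.text)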

After doing the proper research, I got to know some rules for scraping:

Scraping data through official APIs is legal. Some applications allow scraping,
whereas others do not allow us to scrape their data and protect it with privacy
policies; if we still try to do so, the account we are scraping from might be
banned after a few attempts.

For example, Google does not allow scraping: it is technically possible to scrape
the normal result pages, but Google does not allow it. If you scrape at a rate
higher than 8 keyword requests per hour you risk detection, and higher than
10 per hour will get you blocked.

LinkedIn also does not allow us to scrape its data. After a few attempts the
scraping is detected and the account gets blocked or banned.
TASK - 2:
Learn How to Use Python Libraries :
The second task was to understand the open-source libraries that will be
used during the application development in Python. The Python Standard
Library contains the exact syntax, semantics, and tokens of Python. It
contains built-in modules that provide access to basic system functionality
like I/O and some other core modules.

The python libraries used in the project are:


1. Beautiful Soup.
2. Selenium.
3. Requests.
4. Tkinter.
5. Pandas.
6. time.
7. functions.

I got to know the Python libraries which I can use to scrape the data.

The best library I found for scraping is Selenium, which provides good
automation inside Python to reach the data, after which the scraping itself
can start.
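
As a hedged illustration of that idea, the sketch below launches a browser with Selenium, grabs the rendered HTML, and hands it to BeautifulSoup (assumptions: Google Chrome and the lxml parser are installed locally, and the URL is only an example, not a site from this internship):

# Minimal sketch: Selenium fetches a rendered page, BeautifulSoup parses it.
# Assumptions: Chrome is installed; Selenium 4 manages the driver itself;
# the lxml parser is installed for BeautifulSoup.
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()            # start a Chrome browser session
driver.get("https://example.com")      # example URL only
html = driver.page_source              # HTML after any JavaScript has run
driver.quit()                          # always close the browser

soup = BeautifulSoup(html, "lxml")     # parse the rendered HTML
print(soup.title.text)                 # e.g. print the page title
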
“For Learning Purposes Only”
DAY – 3:
A BOT TO SCRAPE DATA
TASK - 1:
Research for a website :
In the first task of day 3, I researched a webpage from which I would
be able to get data to scrape as a learner.

After a few hours of research, I found a website named Clutch
where I can get data which a learner can scrape.

*The data that I obtain will be used for my learning purposes
only and will be deleted afterwards.

Clutch is a B2B research, ratings and reviews site that identifies leading IT
and marketing service providers and software. Clutch evaluates companies
based on over a dozen quantitative and qualitative factors, including client
reviews, company experience, client list, industry recognition, and market
presence. Clutch helps companies manage their online reputation through
3rd party, verified reviews and increases their online visibility and traffic.
Services & Solutions offered by Clutch:
● Advertising & Marketing
● Search Engine Optimization (SEO)
● Mobile App Development
● Web & Software development
● Web Design
● IT Services & Solutions
● Business Services
TASK - 2:
Trying to Scrape Data:
I created a bot in python using different open source libraries of python.
Some of the libraries needed to be installed first before creating the
python application. The libraries are:
1. Beautiful Soup.
2. Requests.
3. lxml.
4. Pandas.
5. openpyxl.
6. selenium.

Among the above libraries, only four need to be imported directly:
BeautifulSoup, Pandas, Requests, and Selenium.

A brief description of python libraries:

A Python library is a collection of related modules. It contains bundles
of code that can be reused in different programs. It makes Python
programming simpler and more convenient for the programmer, as we don't
need to write the same code again and again for different programs.

The BeautifulSoup library is used to parse HTML data, i.e., this library takes
HTML fetched from the internet and turns it into Python objects. It is used
to navigate, search and modify the parse tree of the DOM.

Requests is an elegant and simple HTTP library for Python through which
HTTP requests can be made to a server. This library allows a developer to
make HTTP requests on behalf of the program.

Pandas is an open-source, BSD-licensed library providing
high-performance, easy-to-use data structures and data analysis tools for the
Python programming language. Pandas provides a rich set of functions to
process various types of data.
Further, working with pandas is fast, easy and more expressive than other
tools. Pandas provides fast data processing, like NumPy, along with flexible
data manipulation techniques, like spreadsheets and relational databases.
Lastly, pandas integrates well with the matplotlib library, which makes it a
very handy tool for analyzing data.
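
As a hedged example of the pandas step used later in this report, the sketch below turns a small list of scraped records into a DataFrame and writes it to an Excel file (the column names, values, and file name are only illustrative; writing .xlsx files requires the openpyxl package listed earlier):

# Minimal sketch: turn a list of scraped records into an Excel file.
# Assumptions: column names and file name are illustrative; openpyxl is installed.
import pandas as pd

rows = [
    {"Company": "Example Web Studio", "Location": "Ahmedabad", "Rating": 4.8},
    {"Company": "Sample Designs", "Location": "Mumbai", "Rating": 4.5},
]

df = pd.DataFrame(rows)                      # list of dicts -> tabular DataFrame
df.to_excel("companies.xlsx", index=False)   # write to Excel without the row index
print(df.head())                             # quick look at the first rows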

Selenium Library is a web testing library for Robot Framework that utilizes
the Selenium tool internally. The project is hosted on GitHub and downloads
can be found from PyPI. SeleniumLibrary works with Selenium 3 and 4. It
supports Python 3.6 or newer.

To install the Selenium bindings on our system, run the command: pip install
selenium. Once this is done, a selenium package folder is created inside
Python's site-packages directory. To update an existing version of Selenium,
run the command: pip install -U selenium.

And thus, using these libraries, I could start scraping the data from scratch
and start developing a program around it.

By the end of the day, I had successfully created a bot that was able to
scrape data from Clutch which a learner can access.
“For Learning Purposes Only”
DAY – 4:
A BOT TO SCRAPE DATA
TASK - 1:
Write python program from scratch:
The data that will be collected from Clutch will be of web designers from all
locations.

We need to create a request object each time a new HTTP request is made,

i.e:

text = requests.get("https://clutch.co/web-designers/").text

The received data is stored in a variable (text); we then need to pass that data
to a BeautifulSoup object so that the HTML can be parsed easily,

i.e:

soup = BeautifulSoup(text, 'lxml')

After passing the data, it can now be parsed easily; using BeautifulSoup's
built-in methods we can find an element and get the text data from it.

Eg: company_name = job.find('a', class_="company_title").text.strip()

The above line means: find a hyperlink element (<a>) that has
class="company_title", get its text string, remove the surrounding whitespace,
and store it in the company_name variable.
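
Putting these pieces together, a minimal sketch of the single-page version looks like this (the company_title class comes from the example above, while the provider-row container class is an assumption; the live Clutch markup and its terms of service may differ):

# Minimal single-page scrape sketch (class names follow the example above;
# "provider-row" is an assumed container class and may differ on the live site).
import requests
from bs4 import BeautifulSoup

text = requests.get("https://clutch.co/web-designers/").text
soup = BeautifulSoup(text, "lxml")

company_names = []
for job in soup.find_all("li", class_="provider-row"):   # assumed listing container
    link = job.find("a", class_="company_title")         # class from the example above
    if link:                                             # skip cards without a title link
        company_names.append(link.text.strip())

print(company_names)
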
TASK - 2:
Finding Patterns for Pagination:
Suppose we have a list of strings called book. Given a page index (0-indexed)
into the book and a page_size, we have to find the list of words on that page.
If the page is out of range, we simply return an empty list, as in the small
sketch below.
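
A minimal sketch of this paging idea, using the same names book, page, and page_size as above:

# Return the words on a given 0-indexed page, or [] if the page is out of range.
def words_on_page(book, page, page_size):
    start = page * page_size
    if page < 0 or start >= len(book):
        return []                          # page is out of range
    return book[start:start + page_size]

# Example: a "book" of 5 words, 2 words per page.
book = ["web", "scraping", "with", "python", "rocks"]
print(words_on_page(book, 1, 2))   # ['with', 'python']
print(words_on_page(book, 5, 2))   # []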

In our program, the same idea applies: there are patterns in the URL through
which pagination can often be done easily, because the only change is the
page number added at the end of the URL.

Finding and matching these patterns across a whole website is a difficult
task, as is then applying the logic for how, and in what form, the pattern
is matched.

For example, I had to inspect all the pages, check the matching classes and
the HTML tags they are nested inside, and find the proper ID or XPath through
which I could pull the respective data into the program by changing the
particular placeholders.

As observed in the Clutch URL, the page parameter increments as pages are
changed, so to get data from all the pages the logic needed to sit inside a
for loop that feeds the page number into a formatted string in Python.

i.e:
r = range(1, last_page + 1)   # r: the page numbers to loop over (last_page is illustrative)
for i in r:
    text = requests.get('https://clutch.co/web-designers?page={}'.format(i)).text
    soup = BeautifulSoup(text, 'lxml')

The code that extracts data from the web page is also placed inside the for
loop, so extraction runs again each time the page is updated. In this way the
data from all the pages is collected, stored in a list, converted into a pandas
DataFrame, and then written to an Excel file (an end-to-end sketch of this loop
is shown below).

Around 55,345 firms were extracted for web development from all around
the globe. It took about 25 minutes to extract the data.
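
A hedged end-to-end sketch of this pagination loop is shown below. It reuses the example class names from earlier, fetches only a few pages for illustration (the real run covered far more), and adds a small delay between requests out of courtesy to the site:

# End-to-end sketch: loop over pages, collect company names, export to Excel.
# Assumptions: class names follow the earlier examples; only three pages are
# fetched here for illustration; openpyxl is installed for the Excel export.
import time
import requests
import pandas as pd
from bs4 import BeautifulSoup

records = []
r = range(1, 4)                                   # example: first three pages only

for i in r:
    text = requests.get('https://clutch.co/web-designers?page={}'.format(i)).text
    soup = BeautifulSoup(text, 'lxml')
    for job in soup.find_all('li', class_='provider-row'):   # assumed container class
        link = job.find('a', class_='company_title')         # class from the example
        if link:
            records.append({'Company': link.text.strip(), 'Page': i})
    time.sleep(2)                                 # be gentle with the site's servers

df = pd.DataFrame(records)                        # list of dicts -> DataFrame
df.to_excel('web_designers.xlsx', index=False)    # final Excel output
print(len(df), 'companies collected')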

And thus, by the end of the day, I had successfully created the pagination
logic to crawl through all the pages on Clutch and scrape the data.

After working hard for two days, I found out that this is the old-school
method of developing a whole program from scratch, while there are
widely available platforms offering ready-made basic building blocks,
so the real, creative work can be done in relatively less time.

By the end of the day I got to know that there’s a platform by Microsoft
called Power Automate where we can create different kinds of
Automation flows.
DAY – 5:
RESEARCH ON POWER AUTOMATE
Research on Power Automate:

I did research on Power Automate. Microsoft provides a learning platform
through which we can learn Power Automate, and many YouTube videos are also
available for learning the platform.

Microsoft Power Automate is all about process automation. Power
Automate allows anyone with knowledge of the business process to create
repeatable flows that, when triggered, leap into action and perform the
process for them.

The automated tasks can be initiated manually, or they can be initiated
automatically, in a number of different ways. Power Automate is a service,
provided by Microsoft's Power Platform, that helps you create automated
workflows between your favorite apps and services to synchronize files, get
notifications, collect data, and more.

Before Power Automate, Dynamics 365 administrators had to create
workflows manually. Workflows in D365 that run in the background
or in real time contain all of the following components:

● Stage
● Check condition
● Conditional branch
● Default action
● Wait
● Parallel wait branch
● Create record
● Update record
● Assign record
● Send email
● Start child workflow
● Change status
Another disadvantage of workflows in D365 is that they can only work with data
inside Dynamics and require data integration for outside data. For
additional functionality, custom workflow extensions need to be created as
well.

Benefits of Power Automate:


● Enhanced Productivity
● Enhanced Participation
● Saves time
● Integrate with Legacy applications
● Pre-built connectors
● Cognitive services
● Data integrity
● Virtual agent and AI Builder integration

Using Power Automate is reported to reduce app development costs by around
70%. Users can create custom workflows for their organization with little
coding knowledge. Workflows in Power Automate can be triggered by a button
push, a scheduled date, an event, automatic triggers, etc.

With Power Automate we can create flows that are automated or
scheduled. Automated flows are triggered by an event and perform one or
more tasks. Scheduled flows perform one or more tasks on a predefined
schedule. Flows can also provide step-by-step instructions to users to
complete a certain task.
DAY –6:
LEARN MICROSOFT POWER AUTOMATE
Learn how to make flows in automate:

There are several types of flows we can create in Power Automate, i.e.:

– Automated Cloud Flow
– Instant Cloud Flow
– Scheduled Cloud Flow
– Desktop Flow
– Business Process Flow
– Process Advisor
● Automated Cloud Flow:
This flow is triggered by a designated event.
● Instant Cloud Flow:
This flow is triggered manually as needed.
● Scheduled Cloud Flow:
This flow is triggered when you choose how often to run.
● Desktop Flow:
These flows can be created on desktop environments on the system.
● Business Process Flow:
This flow guides users through a multistep process.
● Process Advisor:
This flow is made to evaluate and optimize the existing processes and tasks.
There are three main ways to get started creating a flow. The first is to start
from blank, where you create a new flow from scratch. This requires that you
know ahead of time what you want to do, including what will trigger
the flow, what action will be taken, which applications need to be involved,
etc. If you have that knowledge, this is the most flexible way to get exactly
what you need.

And so, by the end of the day, I had a good idea of how Microsoft
Power Automate works.
DAY – 7:
CREATING CLOUD AUTOMATE FLOWS
Cloud flow that sends emails from Excel:
Created a cloud flow that sends emails; the email addresses are fetched from
the Excel column where the receiver's email is present.

The flow created is manually triggered, which means the flow will only run
when the user clicks on Run flow.

In an Excel spreadsheet data is present:

1. Name.
2. Company.
3. Email.
4. Amount.
5. Amount text.

The table is given a name: Invoices.

When the flow runs, the email address is fetched from the Email column
of the table. From the list of rows present in the table we also get the
location of our Excel file.

Then, inside an action that runs for each row, we send the email; the
receiver can be set dynamically by selecting the column name from the
Excel table.

In the body we can write the message we want to send, in which the name of
the receiver can also be set dynamically, with the data fetched from the
Excel sheet.
DAY – 8:
CREATING CLOUD AUTOMATE FLOWS
Flow to save Gmail attachment into drive:
Created a cloud automated flow that stores attachments received in the
mail into Drive.
This flow is a manually triggered flow, which means it can only run
when the user clicks on the run button.

This flow gets the email received from each different email ID, then the
attachment is fetched from that email and stored into Google Drive with
the attachment's name.

In the above image, an email is sent to the receiver with the subject
"image" and an attached image named "jswt lockscreen.png".

The receiver receives the email and the attached image, and the image is
stored directly in the assigned folder of the Drive with the given attachment
name.

Below is the image of the received attachment.


And so I have successfully created a Power Automate flow where, when an image or
other data arrives in the mail, it is directly uploaded to Drive.
DAY – 9:
CREATING CLOUD AUTOMATE FLOWS
Flow to save a YouTube link in Excel when a video is uploaded:
Created a cloud automated flow that stores the link of a YouTube video
when one is uploaded.

This flow is a Manually triggered flow which means this flow can only run
when the user clicks on the run button.

This flow gets the unique url to the video when a video is uploaded to the
account.
DAY – 10:
INTRODUCTION TO DESKTOP FLOWS
TASK - 1:
Learn About Desktop Flows:
Desktop flows broaden the existing robotic process automation (RPA)
capabilities in Power Automate and enable you to automate all repetitive
desktop processes. It’s quicker and easier than ever to automate with the new
intuitive Power Automate desktop flow designer using the prebuilt
drag-and-drop actions or recording your own desktop flows to run later.

Leverage automation capabilities in Power Automate. Create flows, interact
with everyday tools such as email and Excel, or work with modern and legacy
applications. Examples of simple and complex tasks you can automate are:

● Quickly organize your documents using dedicated files and folders actions.

● Accurately extract data from websites and store them in excel files using
Web and Excel automation.
● Apply desktop automation capabilities to put your work on autopilot.

Desktop flows are addressed to home users, small businesses, enterprises and
larger companies. They're addressed essentially to everyone who is
performing simple or complex rule-based tasks on their workstations.

Whether you are a home user checking a weather website for tomorrow's
forecast, a self-employed businessperson extracting information from vendors'
invoices, or an employee of a large enterprise automating data entry in an
ERP system, Power Automate is designed for you.

It allows you to automate legacy applications, such as terminal emulators,
as well as modern web and desktop applications, Excel files, and folders.
You can interact with the machine using application UI elements, images, or
coordinates.
Windows 11 allows users to create automations through the preinstalled
Power Automate app. Power Automate is a low-code platform that enables
home and business users to optimize their workflows and automate
repetitive and time-consuming tasks.

TASK - 2:
Create A Desktop Flow:

1. To create a desktop flow in Power Automate, open the app and select New
Flow.
2. Enter a name for the desktop flow, and select OK.
3. Create the flow in the flow designer and press Ctrl+S to save the flow. Close
the flow designer and the flow will appear in the console.
DAY – 11:
MAKING AUTOMATE FLOWS WITH DATAVERSE
Overview Of How To Integrate Flow With Dataverse:
With Microsoft Dataverse, you can store and manage data for business
applications and integrate natively with other Microsoft Power Platform
services like Power BI, Power Apps, Power Virtual Agents, and AI Builder
from your cloud flows.

The Microsoft Dataverse connector provides several triggers to start your
flows and many actions that you can use to create or update data in
Dataverse while your flows run. You can use Dataverse actions even if your
flows don't use a trigger from the Dataverse connector.

Use the Microsoft Dataverse connector to create cloud flows that start when
data changes in Dataverse tables and custom messages. For example, you
can send an email whenever a row gets updated in Dataverse.

Overview Of Triggers:
The Microsoft Dataverse connector provides the following triggers to help
you define when your flows start:

● When an action is performed

● When a row is created, updated, or deleted

● When a flow step is run from a business process flow


Overview Of Actions:
The Microsoft Dataverse connector provides the following actions to help
you manage data in your flows:

● Create a new row

● Update a row

● Search rows with relevance search

● Get a row

● List rows

● Delete a row

● Relate rows

● Unrelate rows

● Execute a changeset request

● Get file or image content

● Upload file or image content

● Perform a bound action

● Perform an unbound action


DAY – 12:
BUSINESS PROCESS FLOWS
Overview Of Business Process flows:
Business process flows provide a guide for people to get work done. They
provide a streamlined user experience that leads people through the
processes their organization has defined for interactions that need to be
advanced to a conclusion of some kind. This user experience can be tailored
so that people with different security roles can have an experience that best
suits the work they do.

Use business process flows to define a set of steps for people to follow to
take them to a desired outcome. These steps provide a visual indicator that
tells people where they are in the business process. Business process flows
reduce the need for training because new users don’t have to focus on which
table they should be using. They can let the process guide them. You can
configure business process flows to support common sales methodologies
that can help your sales groups achieve better results. For service groups,
business process flows can help new staff get up-to-speed more quickly and
avoid mistakes that could result in unsatisfied customers.

What can Business Process flows do:


With business process flows, you define a set of stages and steps that are
then displayed in a control at the top of the form.

Each stage contains a group of steps. Each step represents a column where
data can be entered. You can advance to the next stage by using the Next
Stage button. In the unified interface, you can work with a business process
flow stage inside the stage flyout or you can pin it to the side pane. Business
process flows don't support expanding the stage flyout to the side pane on
mobile devices.

You can make a step required so that people must enter data for a
corresponding column before they can proceed to the next stage. This is
commonly called ”stage-gating”. If you are adding a business-required or
system-required column to a business process flow stage, we recommend
that you add this column to your form as well.
Business process flows appear relatively simple compared to other types of
processes because they do not provide any conditional business logic or
automation beyond providing the streamlined experience for data entry and
controlling entry into stages. However, when you combine them with other
processes and customizations, they can play an important role in saving
people time, reducing training costs, and increasing user adoption.

When someone creates a new table row, the list of available active business
process definitions is filtered by the user’s security role. The first activated
business process definition available for the user’s security role according to
the process order list is the one applied by default. If more than one active
business process definition is available, users can load another from the
Switch Process dialog. Whenever processes are switched, the one currently
rendered goes to the background and is replaced by the selected one, but it
maintains its state and can be switched back. Each row can have multiple
process instances associated (each for a different business process flow
definition, up to a total of 10). On form load, only one business process flow
is rendered. When any user applies a different process, that process may only
load by default for that particular user.
CONCLUSION :

● Hereby, I have learnt the importance of using low-code Power Automate
processes.
● Successfully created three Power Automate applications:

a. Using Cloud Flow:

Whatever data is entered in Excel is read, and the cloud automated flow
dynamically sets the email receiver and adds the data into the email body
for each row from Excel.

b. Using Cloud Flow:
Whenever an email is received, any attachments such as images, audio, video,
PDF documents, Word documents, Excel documents, etc. are stored directly in
the receiver's Drive storage with the same name as the attachment.

c. Using Cloud Flow:
Whenever a YouTube video is uploaded from the user's Gmail account, a flow
running actively in the background stores the name and URL of the uploaded
video in an Excel sheet.
BIBLIOGRAPHY :

● The following references were used during my learning process:

a. To learn Power Automate:
Power Automate documentation - Power Automate | Microsoft Docs

b. To learn cloud flows:
Automate tasks by creating a cloud flow - Power Automate | Microsoft Docs

c. To learn data mining concepts, book referred:
Data Mining: Concepts and Techniques, 3rd Edition

d. To learn Python:
Python Tutorial (w3schools.com)

THANK YOU!
