
CAREERERA®
A Warm Welcome To Careerera Family
Exploratory Data Analysis


WHAT IS EXPLORATORY DATA ANALYSIS?

• Exploratory Data Analysis (EDA) is an approach to analyzing datasets to summarize their main characteristics, often with visual methods.
• EDA is used to see what the data can tell us before the modeling task.
HOW CAN EXPLORATORY DATA ANALYSIS BE USED IN A REAL-LIFE SITUATION?

• Suppose you want to understand what kinds of shoes your customers buy. There are dress shoes, hiking boots, sandals, etc. Using EDA, you stay open to the fact that any number of people might buy any number of different types of shoes.
• You visualize the data using exploratory data analysis and find that most customers buy 1-3 different types of shoes.
ADVANTAGES OF EDA
1. It gives us valuable insights into the data.
2. It helps us with feature selection
3. Visualization is an effective way of detecting outliers.

DISADVANTAGES OF EDA
1. If not performed properly, EDA can misguide the analysis of a problem.
2. EDA is not effective when we deal with high-dimensional data.
WHAT IS NUMPY?
• NumPy is a Python library used for working with arrays.
• NumPy is a package that contains classes, functions, variables, a large library of mathematical functions, etc. for scientific calculation.
• NumPy can be used to create n-dimensional arrays, where n is any positive integer. We can create 1-dimensional, 2-dimensional, 3-dimensional, and in general n-dimensional arrays.
WHY USE NumPy?
• In Python we have lists that serve the purpose of arrays, but they are
slow to process.
• NumPy aims to provide an array object that is up to 50x faster than
traditional Python lists.
• The array object in NumPy is called ndarray, it provides a lot of
supporting functions that make working with ndarray very easy.
• Arrays are very frequently used in data science, where speed and
resources are very important.
WHY IS NumPy FASTER THAN LISTS?
• NumPy arrays are stored at one continuous place in memory unlike
lists, so processes can access and manipulate them very efficiently.
• This behavior is called locality of reference in computer science.
• This is the main reason why NumPy is faster than lists. Also it is
optimized to work with latest CPU architectures.
Import NumPy
• There are two ways to import NumPy:
• import numpy — this will import the entire NumPy module. (Python is case-sensitive, so the module name is written in lowercase.)
• from numpy import * — this will import all classes, objects, variables, etc. from the NumPy package; here * means all.

import numpy
from numpy import *
NumPy as np
• NumPy is usually imported under the np alias.
• alias: In Python, an alias is an alternate name for referring to the same thing.
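A minimal sketch of the conventional import:

```python
import numpy as np  # np is the standard alias used throughout the community

arr = np.array([1, 2, 3])
print(arr)   # [1 2 3]
```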
Create a NumPy ndarray Object
• NumPy is used to work with arrays. The array object in NumPy is called
ndarray.
• We can create a NumPy ndarray object by using the array() function.
• To create an ndarray, we can pass a list, tuple or any array-like object
into the array() method, and it will be converted into an ndarray
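For example, both a list and a tuple are converted into an ndarray by array():

```python
import numpy as np

from_list = np.array([1, 2, 3, 4])    # created from a list
from_tuple = np.array((1, 2, 3, 4))   # created from a tuple
print(type(from_list))                # <class 'numpy.ndarray'>
```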
DIMENSIONS IN ARRAYS
• A dimension in arrays is one level of array depth (nested arrays).
• nested arrays: arrays that have arrays as their elements.
0-D Arrays
1-D Arrays
2-D Arrays
3-D Arrays
0-D Arrays
• 0-D arrays, or Scalars, are the elements in an array. Each value in an array is a 0-D array.
1-D Arrays
• An array that has 0-D arrays as its elements is called uni-dimensional or 1-D array.
• These are the most common and basic arrays.
2-D Arrays
• An array that has 1-D arrays as its elements is called a 2-D array.
• These are often used to represent a matrix or 2nd-order tensors.
3-D Arrays
• An array that has 2-D arrays (matrices) as its elements is called a 3-D array.
• These are often used to represent a 3rd order tensor.
CHECK NUMBER OF DIMENSIONS
• NumPy arrays provide the ndim attribute, which returns an integer that tells us how many dimensions the array has.
HIGHER DIMENSIONAL ARRAYS
• An array can have any number of dimensions.
• When the array is created, you can define the minimum number of dimensions by using the ndmin argument.
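The dimension rules above can be sketched as follows; ndim reports the depth, and ndmin forces a minimum depth at creation time:

```python
import numpy as np

a0 = np.array(42)                          # 0-D (a scalar)
a1 = np.array([1, 2, 3])                   # 1-D
a2 = np.array([[1, 2, 3], [4, 5, 6]])      # 2-D
a3 = np.array([[[1, 2], [3, 4]],
               [[5, 6], [7, 8]]])          # 3-D
print(a0.ndim, a1.ndim, a2.ndim, a3.ndim)  # 0 1 2 3

# ndmin forces a minimum number of dimensions
a5 = np.array([1, 2, 3, 4], ndmin=5)
print(a5.shape)   # (1, 1, 1, 1, 4)
```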
NumPy ARRAY INDEXING

• Access Array Elements


• Array indexing is the same as accessing an array element.
• You can access an array element by referring to its index number.
• The indexes in NumPy arrays start with 0, meaning that the first
element has index 0, and the second has index 1 etc.
ACCESS 2-D ARRAYS
• To access elements from 2-D arrays we can use comma separated
integers representing the dimension and the index of the element.
ACCESS 3-D ARRAYS
• To access elements from 3-D arrays we can use comma separated
integers representing the dimensions and the index of the element.
NEGATIVE INDEXING
• Use negative indexing to access an array from the end.
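The indexing rules for 1-D, 2-D, 3-D, and negative indexes can be sketched as:

```python
import numpy as np

a1 = np.array([1, 2, 3, 4])
print(a1[0])        # 1  (indexes start at 0)

a2 = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
print(a2[1, 2])     # 7  (2nd row, 3rd column)

a3 = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print(a3[0, 1, 1])  # 4  (one integer per dimension)

print(a1[-1])       # 4  (negative index counts from the end)
```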
NumPy ARRAY SLICING
• Slicing in python means taking elements from one given index to
another given index.
• We pass slice instead of index like this: [start: end].
• We can also define the step, like this: [start:end:step].
• If we don't pass start, it is considered 0.
• If we don't pass end, it is considered the length of the array in that dimension.
• If we don't pass step, it is considered 1.
• Note: The result includes the start index, but excludes the end index.
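The slicing rules above, including defaults, step, and negative slicing:

```python
import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6, 7])
print(arr[1:5])     # [2 3 4 5]  start included, end excluded
print(arr[:4])      # [1 2 3 4]  missing start defaults to 0
print(arr[4:])      # [5 6 7]    missing end defaults to the length
print(arr[::2])     # [1 3 5 7]  step of 2: every second element
print(arr[-3:-1])   # [5 6]      negative slicing counts from the end
```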
NEGATIVE SLICING
• Use the minus operator to refer to an index from the end:
STEP
• Use the step value to determine the step of the slicing:
SLICING 2-D ARRAYS
CHECKING THE DATA TYPE OF AN ARRAY
• The NumPy array object has a property called dtype that returns the
data type of the array:
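A quick sketch of the dtype property (the exact integer type is platform dependent):

```python
import numpy as np

ints = np.array([1, 2, 3])
strs = np.array(['apple', 'banana'])
print(ints.dtype)   # e.g. int64 (platform dependent)
print(strs.dtype)   # <U6 (Unicode strings up to 6 characters)
```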
NumPy ARRAY COPY VS VIEW
• The main difference between a copy and a view of an array is that the
copy is a new array, and the view is just a view of the original array.
• The copy owns the data and any changes made to the copy will not
affect original array, and any changes made to the original array will
not affect the copy.
• The view does not own the data and any changes made to the view
will affect the original array, and any changes made to the original
array will affect the view.
COPY
• Make a copy, change the original array, and display both arrays:
• The copy SHOULD NOT be affected by the changes made to the
original array.
VIEW
• Make a view, change the original array, and display both arrays.
• The view SHOULD be affected by the changes made to the original array.
CHECK IF AN ARRAY OWNS ITS DATA
• As mentioned above, copies own the data and views do not, but how can we check this?
• Every NumPy array has the attribute base that returns None if the array
owns the data.
• Otherwise, the base attribute refers to the original object.
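Copy, view, and the base attribute together in one sketch:

```python
import numpy as np

arr = np.array([1, 2, 3, 4, 5])

c = arr.copy()   # owns its own data
v = arr.view()   # shares data with the original

arr[0] = 42
print(c)              # [1 2 3 4 5]  -> the copy is unaffected
print(v)              # [42  2  3  4  5] -> the view reflects the change

print(c.base)         # None: the copy owns its data
print(v.base is arr)  # True: the view's base refers to the original
```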
NumPy ARRAY SHAPE
• The shape of an array is the number of elements in each dimension.
• NumPy arrays have an attribute called shape that returns a tuple with
each index having the number of corresponding elements.
NumPy ARRAY RESHAPING
• Reshaping means changing the shape of an array.
• The shape of an array is the number of elements in each dimension.
• By reshaping we can add or remove dimensions or change number of
elements in each dimension.
• Reshape From 1-D to 2-D
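Shape and a 1-D to 2-D reshape, sketched:

```python
import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
print(arr.shape)       # (12,): one dimension with 12 elements

m = arr.reshape(4, 3)  # 1-D -> 2-D: 4 rows of 3 elements each
print(m.shape)         # (4, 3)
```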
WHAT IS PANDAS?
• Pandas is a Python library used for working with data sets.
• It has functions for analyzing, cleaning, exploring, and
manipulating data.
• The name "Pandas" has a reference to both "Panel Data",
and "Python Data Analysis" and was created by Wes
McKinney in 2008.
WHY USE PANDAS?
• Pandas allows us to analyze big data and make conclusions
based on statistical theories.
• Pandas can clean messy data sets, and make them readable
and relevant.
• Relevant data is very important in data science.
WHAT CAN PANDAS DO?
• Pandas gives you answers about the data. Like:
• Is there a correlation between two or more columns?
• What is the average value?
• What are the max and min values?
• Pandas can also delete rows that are not relevant or contain wrong values, such as empty or NULL values. This is called cleaning the data.
Import Pandas
• There are two ways to import pandas:
• import pandas — this will import the entire pandas module.
• from pandas import * — this will import all classes, objects, variables, etc. from the pandas package; here * means all.

import pandas
from pandas import *
WHAT IS SERIES?
• A Pandas Series is like a column in a table.
• It is a one-dimensional array holding data of any type.
WHAT ARE LABELS?
• If nothing else is specified, the values are labeled with their index
number. First value has index 0, second value has index 1 etc.
• This label can be used to access a specified value.
CREATE LABELS
• With the index argument, you can name your own labels.
ACCESS VALUES
• When you have created labels, you can access an item by referring
to the label.
KEY/VALUE OBJECTS AS SERIES
• We can also use a key/value object, like a dictionary, when creating a
Series.
• Note: The keys of the dictionary become the labels.
SERIES
• To select only some of the items in the dictionary, use the index
argument and specify only the items you want to include in the
Series.
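The Series ideas above, sketched with made-up example values:

```python
import pandas as pd

# default integer labels
s = pd.Series([1, 7, 2])
print(s[0])               # 1

# named labels via the index argument
s2 = pd.Series([1, 7, 2], index=["x", "y", "z"])
print(s2["y"])            # 7

# a dictionary's keys become the labels; index selects a subset
calories = {"day1": 420, "day2": 380, "day3": 390}
s3 = pd.Series(calories, index=["day1", "day2"])
print(s3["day1"])         # 420
```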
WHAT IS DATAFRAME?
• Data sets in Pandas are usually multi-dimensional tables, called
DataFrames.
• A Series is like a column; a DataFrame is the whole table.
LOCATE ROW
• As you can see from the result above, the DataFrame is like a table with rows and columns.
• Pandas uses the loc attribute to return one or more specified row(s).
• Note: When you pass a list of indexes in [], the result is a Pandas DataFrame.
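A DataFrame with named indexes and loc, sketched with made-up workout data:

```python
import pandas as pd

data = {"calories": [420, 380, 390], "duration": [50, 40, 45]}
df = pd.DataFrame(data, index=["day1", "day2", "day3"])

print(df.loc["day1"])            # a single label returns a Series
print(df.loc[["day1", "day2"]])  # a list of labels returns a DataFrame
```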
NAMED INDEXES
With the index argument, you can name your own indexes.
LOCATE NAMED INDEXES
• Use the named index in the loc attribute to return the specified
row(s).
PANDAS READ CSV
• A simple way to store big data sets is to use CSV files (comma separated values).
• CSV files contain plain text and are a well-known format that can be read by everyone, including Pandas.
• Tip: use to_string() to print the entire DataFrame. By default, when
you print a DataFrame, you will only get the first 5 rows, and the last
5 rows:
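In practice you would call pd.read_csv("data.csv") on a file on disk; here an in-memory CSV (with made-up column names) stands in for the file so the sketch is self-contained:

```python
import pandas as pd
from io import StringIO

csv_text = "Duration,Calories\n60,409\n60,479\n45,282\n"
df = pd.read_csv(StringIO(csv_text))   # same call shape as read_csv("data.csv")

print(df.to_string())   # to_string() prints the entire DataFrame
```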
LOAD FILES INTO A DataFrame
If your data sets are stored in a file, Pandas can load them into
a DataFrame.
ANALYZING DataFrame
• By default, when you print a DataFrame, you will only get the first 5
rows, and the last 5 rows:
VIEWING THE DATA
• One of the most used methods for getting a quick overview of the DataFrame is the head() method.
• The head() method returns the headers and a specified number of
rows, starting from the top.
• Note: if the number of rows is not specified, the head() method will
return the top 5 rows.
VIEWING THE DATA
• There is also a tail() method for viewing the last rows of the
DataFrame.
• The tail() method returns the headers and a specified number of
rows, starting from the bottom.
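head() and tail() sketched on a small made-up DataFrame:

```python
import pandas as pd

df = pd.DataFrame({"n": range(100)})
print(df.head())    # first 5 rows by default
print(df.head(10))  # first 10 rows
print(df.tail())    # last 5 rows
```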
INFO ABOUT THE DATA
• The DataFrame object has a method called info() that gives you more information about the data set.
DATA CLEANING
• Data cleaning means fixing bad data in your data set.
• Bad data could be:
Empty cells
Data in wrong format
Wrong data
Duplicates
PANDAS - CLEANING EMPTY CELLS
• Empty cells can potentially give you a wrong result when you analyze
data.
• Remove Rows
• One way to deal with empty cells is to remove rows that contain
empty cells.
• This is usually OK, since data sets can be very big, and removing a few
rows will not have a big impact on the result.
• Note: By default, the dropna() method returns a new DataFrame, and
will not change the original.
• If you want to change the original DataFrame, use the inplace = True
argument:
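dropna() with and without inplace, sketched on made-up data containing empty cells:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"Duration": [60, 60, np.nan, 45],
                   "Calories": [409, np.nan, 318, 282]})

cleaned = df.dropna()         # returns a new DataFrame; original unchanged
print(len(df), len(cleaned))  # 4 2

df.dropna(inplace=True)       # modifies the original DataFrame instead
print(len(df))                # 2
```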
REPLACE EMPTY VALUES
• Another way of dealing with empty cells is to insert a new value
instead.
• This way you do not have to delete entire rows just because of some
empty cells.
• The fillna() method allows us to replace empty cells with a value.
REPLACE EMPTY VALUES
Replace Only For Specified Columns
• To only replace empty values for one column, specify the column
name for the DataFrame:
Replace Using Mean, Median, or Mode
• A common way to replace empty cells, is to calculate the mean,
median or mode value of the column.
• Pandas uses the mean() median() and mode() methods to calculate
the respective values for a specified column:
• Mean = the average value (the sum of all values divided by number of
values).
• Median = the value in the middle, after you have sorted all values
ascending.
• Mode = the value that appears most frequently.
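fillna() with a fixed value and with the column mean, sketched on made-up data:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"Calories": [409.0, np.nan, 318.0, 282.0]})

# replace every empty cell with a fixed value
filled = df.fillna(130)

# or replace empty cells in one column with that column's mean
mean_value = df["Calories"].mean()
df["Calories"] = df["Calories"].fillna(mean_value)
print(df["Calories"].isna().sum())   # 0: no empty cells remain
```

median() and mode()[0] can be substituted for mean() in the same pattern.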
Pandas-Cleaning Data of Wrong Format
• Cells with data of wrong format can make it difficult, or even
impossible, to analyze data.
• To fix it, you have two options: remove the rows, or convert all
cells in the columns into the same format.
Convert Into a Correct Format
# Convert To DATE
Removing Rows
• The result from the converting in the example above gave us a NaT
value, which can be handled as a NULL value, and we can remove the
row by using the dropna() method.
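Converting a column to dates and dropping the NaT row, sketched with made-up dates (errors="coerce" turns unparseable entries into NaT):

```python
import pandas as pd

df = pd.DataFrame({"Date": ["2020/12/01", "2020/12/02", None]})

df["Date"] = pd.to_datetime(df["Date"], errors="coerce")  # missing -> NaT
df = df.dropna(subset=["Date"])                           # drop the NaT row
print(len(df))   # 2
```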
Pandas - Fixing Wrong Data
• "Wrong data" does not have to be "empty cells" or "wrong format", it
can just be wrong, like if someone registered "199" instead of "1.99".
• Sometimes you can spot wrong data by looking at the data set,
because you have an expectation of what it should be.
• If you take a look at our data set, you can see that in row 7, the
duration is 450, but for all the other rows the duration is between 30
and 60.
• It doesn't have to be wrong, but considering that this is the data set of someone's workout sessions, we conclude that this person did not work out for 450 minutes.
Replacing Values
• One way to fix wrong values is to replace them with something else.

• For small data sets you might be able to replace the wrong data
one by one, but not for big data sets.
• To replace wrong data for larger data sets you can create some
rules, e.g. set some boundaries for legal values, and replace any
values that are outside of the boundaries.
Replacing Values
• Loop through all values in the "Duration" column.
• If the value is higher than 120, set it to 120:
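The boundary rule above, sketched: loop over the rows and cap any "Duration" above 120.

```python
import pandas as pd

df = pd.DataFrame({"Duration": [60, 45, 450, 30]})

# replace any value outside the legal boundary with the boundary itself
for i in df.index:
    if df.loc[i, "Duration"] > 120:
        df.loc[i, "Duration"] = 120

print(df["Duration"].max())   # 120
```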
Removing Rows
• Another way of handling wrong data is to remove the rows that contain wrong data.
• This way you do not have to find out what to replace them with, and there
is a good chance you do not need them to do your analyses.
Discovering Duplicates

• Duplicate rows are rows that have been registered more than once.
• By taking a look at our test data set, we can assume that row 11 and
12 are duplicates.
• To discover duplicates, we can use the duplicated() method.
• The duplicated() method returns a Boolean value for each row:
Removing Duplicates
• To remove duplicates, use the drop_duplicates() method.
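duplicated() and drop_duplicates() together, sketched on made-up data where the second row repeats the first:

```python
import pandas as pd

df = pd.DataFrame({"Duration": [60, 60, 45],
                   "Calories": [409, 409, 282]})

print(df.duplicated())   # False, True, False: row 1 repeats row 0
df = df.drop_duplicates()
print(len(df))           # 2
```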
Pandas - Data Correlations
• A great aspect of the Pandas module is the corr() method.
• The corr() method calculates the relationship between each column
in your data set.
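A minimal corr() sketch on made-up workout data:

```python
import pandas as pd

df = pd.DataFrame({"Duration": [60, 45, 30, 90],
                   "Calories": [409, 282, 195, 605]})

print(df.corr())
# The diagonal is always 1.0: each column correlates perfectly with itself.
```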
Pandas - Data Correlations
• The corr() method ignores "not numeric" columns.
• Result Explained: The result of the corr() method is a table of numbers that represent how strong the relationship between two columns is.
• The number varies from -1 to 1.
• 1 means that there is a 1 to 1 relationship (a perfect correlation), and
for this data set, each time a value went up in the first column, the
other one went up as well.
• 0.9 is also a good relationship, and if you increase one value, the
other will probably increase as well.
Pandas - Data Correlations
• -0.9 would be just as good a relationship as 0.9, but if you increase one value, the other will probably go down.
• 0.2 means a weak relationship: if one value goes up, it does not mean that the other will.
• What is a good correlation? It depends on the use, but I think it is safe
to say you have to have at least 0.6 (or -0.6) to call it a good
correlation.
• Perfect Correlation:
We can see that "Duration" and "Duration" got the number 1.000000,
which makes sense, each column always has a perfect relationship
with itself.
Pandas - Data Correlations
• Good Correlation:
"Duration" and "Calories" got a 0.922721 correlation, which is a very
good correlation, and we can predict that the longer you work out,
the more calories you burn, and the other way around: if you burned
a lot of calories, you probably had a long work out.
• Bad Correlation:
"Duration" and "Maxpulse" got a 0.009403 correlation, which is a
very bad correlation, meaning that we can not predict the max pulse
by just looking at the duration of the work out, and vice versa.
WHAT IS Matplotlib?
• Matplotlib is a low level graph plotting library in python that serves as
a visualization utility.
• Matplotlib was created by John D. Hunter.
• Matplotlib is open source and we can use it freely.
• Matplotlib is mostly written in python, a few segments are written in
C, Objective-C and JavaScript for Platform compatibility.
Import Matplotlib
• Once Matplotlib is installed, import it in your applications by adding
the import module statement:

• Checking Matplotlib Version


Pyplot
• Most of the Matplotlib utilities lie under the pyplot submodule, and are usually imported under the plt alias:

• EXAMPLE
• Draw a line in a diagram from position (0,0) to position (6,250):
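The example above, sketched (the Agg backend is used here so the code also runs without a display):

```python
import matplotlib
matplotlib.use("Agg")   # non-interactive backend; safe in headless environments
import matplotlib.pyplot as plt
import numpy as np

xpoints = np.array([0, 6])
ypoints = np.array([0, 250])

line, = plt.plot(xpoints, ypoints)   # draws a line from (0,0) to (6,250)
plt.show()
```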
Plotting x and y points
• The plot() function is used to draw points (markers) in a diagram.
• By default, the plot() function draws a line from point to point.
• The function takes parameters for specifying points in the diagram.
• Parameter 1 is an array containing the points on the x-axis.
• Parameter 2 is an array containing the points on the y-axis.
• If we need to plot a line from (1, 3) to (8, 10), we have to pass two
arrays [1, 8] and [3, 10] to the plot function.
Plotting x and y points
• Example
• Draw a line in a diagram from position (1, 3) to position (8, 10):

• The x-axis is the horizontal axis.
• The y-axis is the vertical axis.
Plotting Without Line
• To plot only the markers, you can use shortcut string notation parameter
'o', which means 'rings'.
Example
• Draw two points in the diagram, one at position (1, 3) and one in position
(8, 10):
Matplotlib Labels and Title
• With Pyplot, you can use the xlabel() and ylabel() functions to set a label for the x- and y-axis.
• With Pyplot, you can use the title() function to set a title for the plot.
Example
• Add a plot title and labels for the x- and y-axis:
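Markers-only plotting plus title and axis labels in one sketch (the title and label strings are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")   # non-interactive backend
import matplotlib.pyplot as plt
import numpy as np

x = np.array([1, 8])
y = np.array([3, 10])

plt.plot(x, y, 'o')     # 'o' draws markers (rings) only, no connecting line
plt.title("Sports Watch Data")
plt.xlabel("Average Pulse")
plt.ylabel("Calorie Burnage")
ax = plt.gca()
plt.show()
```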
Set Font Properties for Title and Labels
• You can use the fontdict parameter in xlabel(), ylabel(), and title() to set font
properties for the title and labels.
Example
• Set font properties for the title and labels:
Matplotlib Adding Grid Lines
• Add Grid Lines to a Plot
• With Pyplot, you can use the grid() function to add grid lines to the plot.
Matplotlib Subplots
• Display Multiple Plots
• With the subplots() function you can draw multiple plots in one figure:
Matplotlib Scatter
• With Pyplot, you can use the scatter() function to draw a scatter plot.
• The scatter() function plots one dot for each observation. It needs two
arrays of the same length, one for the values of the x-axis, and one for
values on the y-axis:
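A scatter() sketch with two equal-length arrays (the car-age vs speed values are made up):

```python
import matplotlib
matplotlib.use("Agg")   # non-interactive backend
import matplotlib.pyplot as plt
import numpy as np

x = np.array([5, 7, 8, 7, 2, 17, 2, 9, 4, 11, 12, 9, 6])     # e.g. car age
y = np.array([99, 86, 87, 88, 111, 86, 103, 87, 94, 78, 77, 85, 86])  # speed

pc = plt.scatter(x, y)   # one dot per observation
plt.show()
```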
Compare Plots
There seems to be a relationship between speed and age, but what if we plot the
observations from another day as well? Will the scatter plot tell us something else?
Matplotlib Bars
• Creating Bars
• With Pyplot, you can use the bar() function to draw bar graphs:
• The bar() function takes arguments that describe the layout of the bars.
• The categories and their values are represented by the first and second arguments, as arrays.
Horizontal Bars
• If you want the bars to be displayed horizontally instead of vertically,
use the barh() function:
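bar() and barh() together in one sketch (category names and values are made up):

```python
import matplotlib
matplotlib.use("Agg")   # non-interactive backend
import matplotlib.pyplot as plt
import numpy as np

x = np.array(["A", "B", "C", "D"])   # categories (first argument)
y = np.array([3, 8, 1, 10])          # values (second argument)

bars = plt.bar(x, y)    # vertical bars
plt.barh(x, y)          # horizontal bars, same data
plt.show()
```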
Matplotlib Histograms
• A histogram is a graph showing frequency distributions.
• It is a graph showing the number of observations within each given interval.
• Example: Say you ask for the height of 250 people, you might end up with a
histogram like this:
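The 250-heights example can be sketched with randomly generated data (mean 170 cm, standard deviation 10 are assumed for illustration):

```python
import matplotlib
matplotlib.use("Agg")   # non-interactive backend
import matplotlib.pyplot as plt
import numpy as np

heights = np.random.normal(170, 10, 250)   # 250 simulated heights

counts, bins, _ = plt.hist(heights)        # frequency per interval (bin)
plt.show()
print(int(counts.sum()))   # 250: every observation falls in some bin
```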
Matplotlib Pie Charts
• With Pyplot, you can use the pie() function to draw pie charts:
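A minimal pie() sketch (the fruit labels and sizes are made up):

```python
import matplotlib
matplotlib.use("Agg")   # non-interactive backend
import matplotlib.pyplot as plt
import numpy as np

sizes = np.array([35, 25, 25, 15])
labels = ["Apples", "Bananas", "Cherries", "Dates"]

wedges, texts = plt.pie(sizes, labels=labels)   # one wedge per value
plt.show()
```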
SEABORN
• In the world of analytics, the best way to get insights is by visualizing the data. Data can be visualized by representing it as plots, which are easy to understand, explore, and grasp. Such plots help draw attention to the key elements of the data.
• To analyse a set of data using Python, we make use of Matplotlib, a widely
implemented 2D plotting library. Likewise, Seaborn is a visualization library in
Python. It is built on top of Matplotlib.
• Seaborn is an amazing data visualization library for statistical graphics plotting in
Python.
• It provides beautiful default styles and color palettes to make statistical plots more
attractive.
• It is built on the top of the matplotlib library and also closely integrated to the data
structures from pandas.
Seaborn Vs Matplotlib
• It is often summarized that if Matplotlib “tries to make easy things easy and hard things possible”, Seaborn tries to make a well-defined set of hard things easy too.
• Seaborn helps resolve the two major problems faced by Matplotlib;
the problems are −
• Default Matplotlib parameters
• Working with data frames
• As Seaborn complements and extends Matplotlib, the learning curve is quite gradual. If you know Matplotlib, you are already halfway through Seaborn.
Important Features of Seaborn
• Seaborn is built on top of Python's core visualization library, Matplotlib. It is meant to serve as a complement, not a replacement. However, Seaborn comes with some very important features. Let us see a few of them here. These features help with:
• Built in themes for styling matplotlib graphics
• Visualizing univariate and bivariate data
• Fitting in and visualizing linear regression models
• Plotting statistical time series data
• Seaborn works well with NumPy and Pandas data structures
• It comes with built in themes for styling Matplotlib graphics
• In most cases, you will still use Matplotlib for simple plotting. The knowledge of
Matplotlib is recommended to tweak Seaborn’s default plots.
Installing Seaborn and getting started
• Using Pip Installer
• To install the latest release of Seaborn, you can use pip −
IMPORT SEABORN

• We will import the Seaborn library with the following command −


Color Palettes
• Qualitative Color Palettes
Qualitative or categorical palettes are best suited to plotting categorical data.
Color Palettes
• Sequential Color Palettes
Sequential plots are suitable to express the distribution of data
ranging from relative lower values to higher values within a range.
Appending an additional character ‘s’ to the color passed to the color
parameter will plot the Sequential plot.
Color Palettes
• Diverging Color Palette
Diverging palettes use two different colors. Each color represents variation
in the value ranging from a common point in either direction.
Assume plotting data ranging from -1 to 1. The values from -1 to 0 take one color and 0 to +1 take another color.
By default, the values are centered from zero. You can control it with
parameter center by passing a value.
Color Palettes
• Setting the Default Color Palette
 The function color_palette() has a companion called set_palette(). The relationship between them is similar to the pairs covered in the aesthetics chapter. The arguments are the same for both set_palette() and color_palette(), but the default Matplotlib parameters are changed so that the palette is used for all plots.
Histogram
• Histograms represent the data distribution by forming bins along the
range of the data and then drawing bars to show the number of
observations that fall in each bin.
• Seaborn comes with some datasets and we have used few datasets in
our previous chapters. We have learnt how to load the dataset and
how to lookup the list of available datasets.
Kernel Density Estimates
• Kernel Density Estimation (KDE) is a way to estimate the probability
density function of a continuous random variable. It is used for non-
parametric analysis.
• Setting the hist flag to False in distplot() will yield the kernel density estimation plot. (Note that in recent Seaborn versions distplot() is deprecated in favor of histplot() and kdeplot().)
Fitting Parametric Distribution
• distplot() is used to visualize the parametric distribution of a dataset.
Scatter Plot
• Scatter plot is the most convenient way to visualize the distribution
where each observation is represented in two-dimensional plot via x and
y axis.
Hexbin Plot
• Hexagonal binning is used in bivariate data analysis when the data is sparse
in density i.e., when the data is very scattered and difficult to analyze
through scatterplots.
• An additional parameter called 'kind' with the value 'hex' plots the hexbin plot.
Visualizing Pairwise Relationship
• Datasets under study in real situations contain many variables. In such cases, the relation between each and every variable should be analyzed. Plotting bivariate distributions for every pair of variables would be a very complex and time-consuming process.
• To plot multiple pairwise bivariate distributions in a dataset, you can use the pairplot() function. This shows the relationships for every pairwise combination of variables in a DataFrame as a matrix of plots, and the diagonal plots are the univariate plots.
Plotting Categorical Data
• stripplot()
stripplot() is used when one of the variables under study is categorical. It represents the data in sorted order along one of the axes.
Plotting Categorical Data
• swarmplot()
Another option, which can be used as an alternative to jitter, is the function swarmplot(). This function positions each point of the scatter plot on the categorical axis and thereby avoids overlapping points.
Bar Plot
• The barplot() function shows the relation between a categorical variable and a continuous variable. The data is represented in rectangular bars, where the length of the bar represents the proportion of the data in that category.
Point Plots
• Point plots serve the same purpose as bar plots but in a different style. Rather than the full bar, the value of the estimate is represented by a point at a certain height on the other axis.
Box Plots
• Boxplot is a convenient way to visualize the distribution of data
through their quartiles.
• Box plots usually have vertical lines extending from the boxes, which are termed whiskers. These whiskers indicate variability outside the upper and lower quartiles; hence box plots are also termed box-and-whisker plots or box-and-whisker diagrams. Any outliers in the data are plotted as individual points.
Violin Plots
• Violin plots are a combination of the box plot and the kernel density estimate, which makes it easier to analyze and understand the distribution of the data.
Linear Relationships
• Most of the time, we use datasets that contain multiple quantitative variables, and the goal of an analysis is often to relate those variables to each other. This can be done through regression lines.
• While building regression models, we often check for multicollinearity, where we have to see the correlation between all combinations of continuous variables and take the necessary action to remove multicollinearity if it exists. In such cases, the following techniques help.
Functions to Draw Linear
Regression Models
• There are two main functions in Seaborn to visualize a linear
relationship determined through regression. These functions are
regplot() and lmplot().
• regplot
accepts the x and y variables in a variety of formats including simple
NumPy arrays, pandas Series objects, or as references to variables in
a pandas DataFrame
• lmplot
has data as a required parameter and the x and y variables must be
specified as strings. This data format is called “long-form” data
regplot
lmplot
THANK YOU !!!
(USA) (INDIA)
2-Industrial Park Drive, E-Waldorf, MD,
20602,
B-44, Sector-59,
United States Noida Uttar
Pradesh 201301
(USA)
(INDIA)
+1-844-889-4054
+91-92-5000-4000
(Singapore)
3 Temasek Avenue, Singapore 039190
info@careerera.com
www.careerera.com
