Automotive Basics
Automotive Interview Questions
4. The Controller Area Network (CAN) bus is a serial asynchronous bus used in
instrumentation applications in industries such as automotive.
ADVANTAGES:
– More reliable, e.g., fewer plug-in connectors that might cause errors.
– Less complicated, more economical wiring.
– Easy to implement, and easy to change.
– Additional elements (e.g., control units) are easy to integrate.
– Installation location can be changed without electrical problems.
– The wiring can be diagnosed.
Types of errors – ACK error, Bit error, Stuff error, Form error, CRC error
Assume three devices want to transmit messages with IDs 433, 187 and 154 (decimal), all see that the bus is idle,
and begin transmitting at the same time; this is how the arbitration works out. All three devices will drive the bus to a dominant state for the
start-of-frame (SOF) and the two most significant bits of each message identifier. Each
device will monitor the bus and determine success. When they write bit 8 of the message ID,
the device writing message ID 433 will notice that the bus is in the dominant state when it
was trying to let it be recessive, so it will assume a collision and give up for now. The
remaining devices will continue writing bits until bit 5, then the device writing message ID
187 will notice a collision and abort transmission. This leaves the device writing message ID
154 remaining. It will continue writing bits on the bus until complete or an error is detected.
Notice that this method of arbitration will always cause the lowest numerical value message
ID to have priority. This same method of bit-wise arbitration and prioritization applies to the
18-bit extension in the extended format as well.
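To make the arbitration mechanics concrete, here is a small, hypothetical C sketch (not part of the original answer) that replays the example above bit by bit, assuming 11-bit identifiers sent MSB first with dominant = 0:

#include <stdio.h>

int main(void)
{
    unsigned int ids[3] = { 433, 187, 154 };   /* the three example message IDs */
    int active[3] = { 1, 1, 1 };
    int i, bit;

    for (bit = 10; bit >= 0; bit--) {
        /* the bus reads dominant (0) if any active node drives a 0 in this bit */
        unsigned int bus = 1;
        for (i = 0; i < 3; i++)
            if (active[i] && ((ids[i] >> bit) & 1u) == 0u)
                bus = 0;
        /* a node that sent recessive (1) but sees dominant (0) loses arbitration */
        for (i = 0; i < 3; i++)
            if (active[i] && ((ids[i] >> bit) & 1u) == 1u && bus == 0u)
                active[i] = 0;
    }
    for (i = 0; i < 3; i++)
        if (active[i])
            printf("ID %u wins arbitration\n", ids[i]);   /* prints 154 */
    return 0;
}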
The CAN transceiver is the device that converts between the physical signal levels used on the CAN bus
and the logical signal levels recognized by a microcontroller.
– Hard synchronization is performed on a falling (recessive-to-dominant) edge on the bus while the bus is idle, which
is interpreted as a Start of Frame (SOF). It restarts the internal bit time logic.
– Resynchronization (soft synchronization) is used to lengthen or shorten a bit time while a CAN frame is
being received.
17. What is the difference between function and physical addressing?
Answer: Functional addressing is an addressing scheme that labels messages based
upon their operation code or content. Physical addressing is an addressing scheme that
labels messages based upon the physical address location of their source and/or
destination(s).
18. What happens if I have to send more than 8-bytes of data?
Answer: The J1939 standard has defined a method of communicating more than 8 bytes of
data by sending the data in packets as specified in the Transport Protocol (TP). There are
two types of TP, one for broadcasting the data, and the other for sending it to a specific
address.
DTC consists of 4 components – SPN, FMI, OC and CM.
A DTC is a combination of four independent fields: the Suspect Parameter Number (SPN) of
the channel or feature that can have faults; a Failure Mode Identifier (FMI) of the specific
fault; the occurrence count (OC) of the SPN/FMI combination; and the SPN conversion
method (CM), which tells the receiving node how to interpret the SPN. Together, the SPN,
FMI, OC and CM form a number that a diagnostic tool can use to understand the failure that
is being reported.
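As an illustration only, here is a small C sketch that unpacks one 4-byte DTC, assuming the common conversion method 0 layout (SPN in 19 bits, FMI in 5 bits, OC in 7 bits, CM in 1 bit); check the exact byte and bit positions against J1939-73 before relying on it:

#include <stdio.h>
#include <stdint.h>

/* Assumed layout (conversion method 0): bytes 0-1 = SPN low 16 bits,
   byte 2 = SPN bits 17-19 in the top 3 bits plus FMI in the low 5 bits,
   byte 3 = CM in bit 7 plus occurrence count in bits 0-6. */
void decode_dtc(const uint8_t dtc[4])
{
    uint32_t spn = dtc[0] | ((uint32_t)dtc[1] << 8) | ((uint32_t)(dtc[2] >> 5) << 16);
    uint8_t  fmi = dtc[2] & 0x1F;
    uint8_t  cm  = (dtc[3] >> 7) & 0x01;
    uint8_t  oc  = dtc[3] & 0x7F;

    printf("SPN=%lu FMI=%u OC=%u CM=%u\n",
           (unsigned long)spn, (unsigned)fmi, (unsigned)oc, (unsigned)cm);
}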
19. What is KWP2000?
Answer: KWP2000 (ISO 14230) is a diagnostic communication standard. It specifies the possible
system configurations using the K and L lines, with the same physical characteristics as ISO 9141-2.
It supports the 5-baud wake-up of ISO 9141-2 as well as a new fast initialisation method.
20. What is OBDII?
Answer: On-Board Diagnostics in an automotive context is a generic term referring to a
vehicle’s self-diagnostic and reporting capability
21. Why Diagnostic Standards?
Answer: As systems got more complex the link between cause and symptom became less
obvious. This meant that electronic systems had to have some level of self diagnosis and to
communicate to the outside world. Initially many systems used their own protocols which
meant that garages had to have a large number of tools – even to diagnose a single vehicle.
22. What is meant by verification and validation?
Answer: Verification and Validation (V&V) is the process of checking that a software system
meets specifications and that it fulfills its intended purpose. It is normally part of the
software testing process of a project.
According to the Capability Maturity Model (CMMI-SW v1.1), verification confirms that work products properly reflect the requirements specified for them ("Are we building the product right?"), while validation confirms that the product, as provided, fulfils its intended use ("Are we building the right product?").
The Synchronization Segment Sync_Seg is that part of the bit time where edges of the CAN
bus level are expected to occur; the distance between an edge that occurs outside of
Sync_Seg and the Sync_Seg is called the phase error of that edge. The Propagation Time
Segment Prop_Seg is intended to compensate for the physical delay times within the CAN
network. The Phase Buffer Segments Phase_Seg1 and Phase_Seg2 surround the Sample
Point. The (Re-)Synchronization Jump Width (SJW) defines how far a resynchronization may
move the Sample Point inside the limits defined by the Phase Buffer Segments to
compensate for edge phase errors.
Two types of synchronization exist : Hard Synchronization and Resynchronization. A Hard
Synchronization is done once at the start of a frame; inside a frame only Resynchronizations
occur.
• Hard Synchronization After a hard synchronization, the bit time is restarted with the
end of Sync_Seg, regardless of the edge phase error. Thus hard synchronization forces the
edge which has caused the hard synchronization to lie within the synchronization segment
of the restarted bit time.
• Bit Resynchronization Resynchronization leads to a shortening or lengthening of the bit
time such that the position of the sample point is shifted with regard to the edge.
26. Formula for Baudrate calculation?
The baud rate is calculated as:
baud rate (bits per second) = f_clk / (BRP x (1 + TSEG1 + TSEG2))
where f_clk is the CAN controller clock (for example 18.432 x 10^6 Hz), BRP is the baud rate prescaler, and TSEG1 and TSEG2 are the lengths of the two time segments in time quanta.
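A quick sketch of the calculation (the 18.432 MHz clock comes from the formula above; the BRP and TSEG values below are made-up examples):

#include <stdio.h>

int main(void)
{
    double f_clk = 18.432e6;                   /* CAN controller clock in Hz */
    unsigned brp = 4, tseg1 = 14, tseg2 = 3;   /* hypothetical register values */
    double bit_rate = f_clk / (brp * (1 + tseg1 + tseg2));
    printf("bit rate = %.0f bit/s\n", bit_rate);   /* 18.432e6 / (4 * 18) = 256000 */
    return 0;
}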
27. What happens when two CAN nodes send the same identifier at the same
time?
Two nodes on the network are not allowed to send messages with the same ID. If two nodes
try to send a message with the same ID at the same time, arbitration will not work. Instead,
one of the transmitting nodes will detect that its message is distorted outside of the
arbitration field. The nodes will then use the error handling of CAN, which in this case
ultimately leads to one of the transmitting nodes being switched off (bus-off mode).
28. What is the difference between bit rate and baud rate?
The two are often confused because they are dependent and inter-related. The simplest explanation is that the bit rate is how many data bits are transmitted per second, while the baud rate is the number of times per second a signal in a communications channel changes.
Bit rate measures the number of data bits (that is, 0's and 1's) transmitted in one second in a communication channel. A figure of 2400 bits per second means 2400 zeros or ones can be transmitted in one second, hence the abbreviation "bps." Individual characters (for example letters or numbers), also referred to as bytes, are composed of several bits.
Baud rate is the number of times a signal in a communications channel changes state or varies. For example, a 2400 baud rate means that the channel can change states up to 2400 times per second. The term "change state" means that it can change from 0 to 1 or from 1 to 0 up to X (in this case, 2400) times per second. It also refers to the actual state of the connection, such as voltage, frequency, or phase level.
The main difference between the two is that one change of state can transmit one bit, or slightly more or less than one bit, depending on the modulation technique used. So the bit rate (bps) and baud rate (baud per second) have this connection:
bps = baud per second x the number of bits per baud
The modulation technique determines the number of bits per baud. Here are two examples: When FSK (Frequency Shift Keying, a transmission technique) is used, each baud transmits one bit; only one change in state is required to send a bit, so the modem's bps rate is equal to the baud rate. When a baud rate of 2400 is used with a modulation technique called phase modulation that transmits four bits per baud:
2400 baud x 4 bits per baud = 9600 bps
Such modems are capable of 9600 bps operation.
C Interview Questions :
1. What is the difference between declaration and definition?
Answer: A definition is where a variable or function is actually created and memory
is allocated for it.
A declaration just introduces the name and type of a variable or function without allocating storage.
2. What are the different storage classes in C?
Answer: AUTO, STATIC, EXTERN, REGISTER
auto is the default storage class for local variables.
{
int Count;
auto int Month;
}
register is used to define local variables that should be stored in a CPU register instead of RAM.
This means that the variable has a maximum size equal to the register size (usually one
word) and cannot have the unary '&' operator applied to it (as it does not have a memory
location).
{
register int Miles;
}
register should only be used for variables that require quick access – such as counters. It
should also be noted that specifying 'register' does not mean that the variable will be stored in
a register. It means that it MIGHT be stored in a register – depending on hardware and
implementation restrictions.
There is one very important use for 'static': a file-scope variable or function declared static is visible only inside its own source file. For contrast, here are two example source files that share a variable through external linkage:

/* source 1 */
int count = 5;

main()
{
write();
}

/* source 2 */
#include <stdio.h>
extern int count;

void write(void)
{
printf("count is %d\n", count);
}

count in source 1 has a value of 5. If source 1 changes the value of count, source 2 will see the new value. Had count been declared static in source 1, source 2 would not have been able to access it.
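The other common use of static, a local variable that keeps its value between calls, can be shown with a minimal sketch:

#include <stdio.h>

void count_calls(void)
{
    static int count = 0;   /* initialised once, lives for the whole program */
    count++;
    printf("called %d times\n", count);
}

int main(void)
{
    count_calls();   /* prints "called 1 times" */
    count_calls();   /* prints "called 2 times" */
    return 0;
}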
3. What is interrupt?
Answer: Interrupts (also known as traps or exceptions in some processors) are a technique
of diverting the processor from the execution of the current program so that it may deal with
some event that has occurred. Such an event may be an error from a peripheral, or simply
that an I/O device has finished the last task it was given and is now ready for another. An
interrupt is generated in your computer every time you type a key or move the mouse. You
can think of it as a hardware-generated function call.
4. What is Hardware Interrupt?
Answer: There are two ways of telling when an I/O device (such as a serial controller or a
disk controller) is ready for the next sequence of data to be transferred. The first is busy
waiting or polling, where the processor continuously checks the device’s status register until
the device is ready. This wastes the processor’s time but is the simplest to implement. For
some time-critical applications, polling can reduce the time it takes for the processor to
respond to a change of state in a peripheral.
5. What is Software Interrupt?
Answer: A software interrupt is generated by an instruction. It is the lowest-priority
interrupt and is generally used by programs to request a service to be performed by the
system software (operating system or firmware).
Difference between Hardware Interrupt and Software Interrupt
An interrupt is a special signal that causes the computer’s central processing unit to
suspend what it is doing and transfers its control to a special program called an interrupt
handler. The responsibility of an interrupt handler is to determine what caused the
interrupt, service the interrupt and then return the control to the point from where the
interrupt was caused. The difference between hardware interrupt and software
interrupt is as below:
Hardware Interrupt: This interrupt is caused by some external device such as request to
start an I/O or occurrence of a hardware failure.
Software Interrupt: This interrupt is invoked with the help of the INT instruction. The
programmer triggers this event; it immediately stops execution of the program and
passes execution over to the INT handler. The INT handler is usually a part of the operating
system and determines the action to be taken, e.g. output to the screen, execute a file, etc.
Thus a software interrupt as it’s name suggests is driven by a software instruction and a
hardware interrupt is the result of external causes.
By connecting one input of the oscilloscope (or logic state analyzer) to the INTR pin of the
microprocessor and the second one to the port you activate/deactivate, you can measure
the latency time and the duration of the ISR.
Use of caches
At higher clock speeds, caches are useful as the memory speed is proportionally
slower. Harvard architectures tend to be targeted at higher performance systems, and so
caches are nearly always used in such systems.
Von Neumann architectures usually have a single unified cache, which stores both
instructions and data. The proportion of each in the cache is variable, which may be a good
thing. It would in principle be possible to have separate instruction and data caches, storing
data and instructions separately. This probably would not be very useful as it would only be
possible to ever access one cache at a time.
Caches for Harvard architectures are very useful. Such a system would have separate
caches for each bus. Trying to use a shared cache on a Harvard architecture would be very
inefficient since then only one bus can be fed at a time. Having two caches means it is
possible to feed both buses simultaneously….exactly what is necessary for a Harvard
architecture.
This also allows the system to have a very simple unified memory system, using the same address
space for both instructions and data. This gets around the problem of literal pools and self
modifying code. What it does mean, however, is that when starting with empty caches, it is
necessary to fetch instructions and data from the single memory system, at the same time.
Obviously, two memory accesses are needed therefore before the core has all the data
needed. This performance will be no better than a von Neumann architecture. However, as
the caches fill up, it is much more likely that the instruction or data value has already been
cached, and so only one of the two has to be fetched from memory. The other can be
supplied directly from the cache with no additional delay. The best performance is achieved
when both instructions and data are supplied by the caches, with no need to access external
memory at all.
This is the most sensible compromise and the architecture used by ARM's Harvard processor
cores. Two separate memory systems can perform better, but would be more difficult to
implement.
The routines in ROM test the central hardware, search for video ROM, perform a
checksum on the video ROM and execute the routines in the video ROM.
The routines in the motherboard ROM then continue searching for any other ROMs,
checksum them and execute their routines.
After the POST (Power-On Self Test) is executed, the system will search
for a boot device.
Assuming that a valid boot device is found, IO.SYS is loaded into memory and
executed. IO.SYS consists primarily of initialization code and extensions to the
motherboard ROM BIOS.
MSDOS.SYS is loaded into memory and executed. MSDOS.SYS contains the DOS
routines.
CONFIG.SYS (created and modified by the user; loads additional device drivers for
peripheral devices), COMMAND.COM (the command interpreter: it translates the
commands entered by the user, contains the internal DOS commands, and executes
AUTOEXEC.BAT) and AUTOEXEC.BAT (which contains all the commands that the
user wants executed automatically every time the computer is started) are then processed.
11. What are Little-Endian and Big-Endian? How can I determine whether a machine's
byte order is big-endian or little endian? How can we convert from one to
another?
First of all, do you know what Little-Endian and Big-Endian mean?
Little-Endian means that the lower-order byte of the number is stored in memory at the
lowest address, and the higher-order byte is stored at the highest address. That is, the little
end comes first.
For example, a 4-byte, 32-bit integer with bytes Byte3 Byte2 Byte1 Byte0 will be arranged in memory as follows:
Base_Address+0 Byte0
Base_Address+1 Byte1
Base_Address+2 Byte2
Base_Address+3 Byte3
Intel processors use "Little-Endian" byte order.
"Big-Endian" means that the higher-order byte of the number is stored in memory at the lowest address, and the
lower-order byte at the highest address. The big end comes first.
Base_Address+0 Byte3
Base_Address+1 Byte2
Base_Address+2 Byte1
Base_Address+3 Byte0
Motorola and SPARC (Solaris) processors use "Big-Endian" byte order.
In "Little-Endian" form, code which picks up a 1, 2, 4, or longer byte number proceeds in the same way
for all formats: it first picks up the lowest-order byte at offset 0 and proceeds from there.
Also, because of the 1:1 relationship between address offset and byte number (offset 0 is
byte 0), multiple-precision math routines are easy to code.
In "Big-Endian" form, since the high-order byte comes first, the code can test whether the number is positive or
negative by looking at the byte at offset zero. It is not required to know how long the number
is, nor does the code have to skip over any bytes to find the byte containing the sign
information. The numbers are also stored in the order in which they are printed out, so
binary-to-decimal routines are particularly efficient.
Here is some code to determine the type of your machine.
int num = 1;
if(*(char *)&num == 1)
{
printf("\nLittle-Endian\n");
}
else
{
printf("Big-Endian\n");
}

And here is some code to convert from one endianness to the other:

int myreversefunc(int num)
{
int byte0, byte1, byte2, byte3;

byte0 = (num & 0x000000FF) >> 0;
byte1 = (num & 0x0000FF00) >> 8;
byte2 = (num & 0x00FF0000) >> 16;
byte3 = (num & 0xFF000000) >> 24;

return((byte0 << 24) | (byte1 << 16) | (byte2 << 8) | (byte3 << 0));
}
12. Program to find if a machine is big endian or little endian?
#include "stdio.h"
#define BIG_ENDIAN 0
#define LITTLE_ENDIAN 1

int endian(void);

int main()
{
int value;
value = endian();
if (value == 1)
printf("Machine is Little Endian\n");
else
printf("Machine is Big Endian\n");
return 0;
}

int endian(void) {
short int word = 0x0001;
char *byte = (char *) &word;
return (byte[0] ? LITTLE_ENDIAN : BIG_ENDIAN);
}
13. Swap 2 variables without using temporary variable!
a = a + b;
b = a - b;
a = a - b;
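The same trick can be done with XOR; the sketch below shows both variants (note that the arithmetic version can overflow, and both go wrong if the two operands are actually the same variable):

#include <stdio.h>

int main(void)
{
    int a = 3, b = 7;

    a = a + b;   /* arithmetic swap */
    b = a - b;
    a = a - b;

    a ^= b;      /* XOR swap: swaps them back again here */
    b ^= a;
    a ^= b;

    printf("a=%d b=%d\n", a, b);   /* a=3 b=7 after the two swaps */
    return 0;
}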
14. Write a program to generate the Fibonacci Series?
#include<stdio.h>
#include<conio.h>
main()
{
int n,i,c,a=0,b=1;
printf("Enter Fibonacci series of nth term : ");
scanf("%d",&n);
printf("%d %d ",a,b);
for(i=0;i<=(n-3);i++)
{
c=a+b;
a=b;
b=c;
printf("%d ",c);
}
getch();
}
Output :
Enter Fibonacci series of nth term : 7
0 1 1 2 3 5 8
15. Write a program to find unique numbers in an array?
Answer:
/* print the elements that occur exactly once in an array of n elements */
int i, k, found;
for (i = 0; i < n; i++) {
found = 0;
for (k = 0; k < n; k++) {
if (k != i && array[i] == array[k])
found = 1;
}
if (!found)
printf("%d\n", array[i]);
}
16. Write a C program to print Equilateral Triangle using numbers?
/* C program to print an equilateral triangle using numbers */
#include<stdio.h>
#include<conio.h>
main()
{
int i,j,k,n;

printf("Enter number of rows of the triangle \n");
scanf("%d",&n);

for(i=1;i<=n;i++)
{
for(j=1;j<=n-i;j++)
{
printf(" ");
}
for(k=1;k<=(2*i)-1;k++)
{
printf("%d",i);   /* print the row number, since the question asks for numbers */
}
printf("\n");
}
getch();
}
17. Write a program for deletion and insertion of a node in single linked list?
#include<stdio.h>
#include<stdlib.h>
typedef struct Node
{
int data;
struct Node *next;
}node;
void insert(node *pointer, int data)
{
/* Iterate through the list till we encounter the last node.*/
while(pointer->next!=NULL)
{
pointer = pointer -> next;
}
/* Allocate memory for the new node and put data in it.*/
pointer->next = (node *)malloc(sizeof(node));
pointer = pointer->next;
pointer->data = data;
pointer->next = NULL;
}
int find(node *pointer, int key)
{
pointer = pointer -> next; //First node is dummy node.
/* Iterate through the entire linked list and search for the key. */
while(pointer!=NULL)
{
if(pointer->data == key) //key is found.
{
return 1;
}
pointer = pointer -> next;//Search in the next node.
}
/*Key is not found */
return 0;
}
void delete(node *pointer, int data)
{
/* Go to the node for which the node next to it has to be deleted */
while(pointer->next!=NULL && (pointer->next)->data != data)
{
pointer = pointer -> next;
}
if(pointer->next==NULL)
{
printf("Element %d is not present in the list\n",data);
return;
}
/* Now pointer points to a node and the node next to it has to be removed */
node *temp;
temp = pointer -> next;
/*temp points to the node which has to be removed*/
pointer->next = temp->next;
/*We removed the node which is next to the pointer (which is also temp) */
free(temp);
/* Because we deleted the node, we no longer require the memory used for it.
free() will deallocate the memory.
*/
return;
}
void print(node *pointer)
{
if(pointer==NULL)
{
return;
}
printf("%d ",pointer->data);
print(pointer->next);
}
int main()
{
/* start always points to the first node of the linked list.
temp is used to point to the last node of the linked list.*/
node *start,*temp;
start = (node *)malloc(sizeof(node));
temp = start;
temp -> next = NULL;
/* Here in this code, we take the first node as a dummy node.
The first node does not contain data; it is used to avoid handling special cases
in the insert and delete functions.
*/
*/
printf("1. Insert\n");
printf("2. Delete\n");
printf("3. Print\n");
printf("4. Find\n");
while(1)
{
int query;
scanf("%d",&query);
if(query==1)
{
int data;
scanf("%d",&data);
insert(start,data);
}
else if(query==2)
{
int data;
scanf("%d",&data);
delete(start,data);
}
else if(query==3)
{
printf("The list is ");
print(start->next);
printf("\n");
}
else if(query==4)
{
int data;
scanf("%d",&data);
int status = find(start,data);
if(status)
{
printf("Element Found\n");
}
else
{
printf("Element Not Found\n");
}
}
}
}
18. Can a variable be both const and volatile? Yes. The const modifier means that
this code cannot change the value of the variable, but that does not mean that the value
cannot be changed by means outside this code. For instance, in the example in FAQ 8, the
timer structure was accessed through a volatile const pointer. The function itself did not
change the value of the timer, so it was declared const. However, the value was changed by
hardware on the computer, so it was declared volatile. If a variable is both const and
volatile, the two modifiers can appear in either order.
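A minimal sketch of that situation — a timer register that hardware updates but the program only reads (the address used here is invented for illustration):

#include <stdint.h>

/* volatile: the value changes outside this code; const: this code must not write it */
#define TIMER_REG ((volatile const uint32_t *)0xFFFF6000u)

uint32_t read_timer(void)
{
    return *TIMER_REG;   /* forces a real read from the hardware on every call */
}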
19. what are Constant and Volatile Qualifiers?
const
const is used with a datatype declaration or definition to specify an unchanging value.
Examples:
const int five = 5;
const float pi = 3.141593f;
pi = 3.2;    /* illegal: pi is const */
five = 6;    /* illegal: five is const */
volatile
volatile specifies a variable whose value may be changed by processes outside the
current program.
One example of a volatile object might be a buffer used to exchange data with an
external device:

volatile int iobuf;   /* written by the device or an interrupt handler */

int
check_iobuf(void)
{
int val;
while (iobuf == 0) {
}
val = iobuf;
iobuf = 0;
return(val);
}

If iobuf had not been declared volatile, the compiler would notice that nothing
happens inside the loop and thus eliminate the loop.
const and volatile can be used together.
An input-only buffer for an external device could be declared as const volatile (or volatile const; the order is not important) to make sure the compiler knows that the variable should not be changed (because it is input-only) and that its value may be altered by processes other than the current program.
The keywords const and volatile can be applied to any declaration, including those of
structures, unions, enumerated types or typedef names. Applying them to a declaration is
called qualifying the declaration—that’s why const and volatile are called type qualifiers,
rather than type specifiers. Here are a few representative examples:
volatile i;
volatile int j;
const long q;
struct{
long l;
char c;
}volatile vs;
Don't be put off; some of them are deliberately complicated: what they mean will be
explained later. Remember that they could also be further complicated by introducing
storage class specifications as well – in fact, truly spectacular declarations combine storage
classes, qualifiers and complicated types all at once.
Const
Let’s look at what is meant when const is used. It’s really quite simple: const means that
something is not modifiable, so a data object that is declared with const as a part of its type
specification must not be assigned to in any way during the run of a program. It is very likely
that the definition of the object will contain an initializer (otherwise, since you can’t assign to
it, how would it ever get a value?), but this is not always the case. For example, if you were
accessing a hardware port at a fixed memory address and promised only to read from it,
then it would be declared to be const but not initialized.
Taking the address of a data object of a type which isn’t const and putting it into a pointer to
the const-qualified version of the same type is both safe and explicitly permitted; you will be
able to use the pointer to inspect the object, but not modify it. Putting the address of a const
type into a pointer to the unqualified type is much more dangerous and consequently
prohibited (although you can get around this by using a cast). Here is an example:
#include <stdio.h>
#include <stdlib.h>

main(){
int i;
const int ci = 123;

/* a pointer to a const int */
const int *cpi;

/* an ordinary pointer to a non-const int */
int *ncpi;

cpi = &ci;
ncpi = &i;

/*
* this is allowed
*/
cpi = ncpi;
/*
* this needs a cast, because it is usually a big mistake –
* see what it permits below
*/
ncpi = (int *)cpi;
/*
* now to get undefined behaviour: modify a const through a pointer
*/
*ncpi = 0;

exit(EXIT_SUCCESS);
}
Example 8.3
As the example shows, it is possible to take the address of a constant object, generate a
pointer to a non-constant, then use the new pointer. This is an error in your program and
results in undefined behaviour.
The main intention of introducing const objects was to allow them to be put into read-only
store, and to permit compilers to do extra consistency checking in a program. Unless you
defeat the intent by doing naughty things with pointers, a compiler is able to check
that const objects are not modified explicitly by the user.
An interesting extra feature pops up now. What does this mean?
char c;
char * const cp = &c;
It declares cp to be a constant pointer to char: the pointer itself must not be modified, although the char it points to may be. The other way round is
const char *cp;
which means that now cp is an ordinary, modifiable pointer, but the thing that it points to
must not be modified. So, depending on what you choose to do, both the pointer and the
thing it points to may be modifiable or not; just choose the appropriate declaration.
Volatile
After const, we treat volatile. The reason for having this type qualifier is mainly to do with
the problems that are encountered in real-time or embedded systems programming using C.
Imagine that you are writing code that controls a hardware device by placing appropriate
values in hardware registers at known absolute addresses.
Let’s imagine that the device has two registers, each 16 bits long, at ascending memory
addresses; the first one is the control and status register (csr) and the second is a data port.
The traditional way of accessing such a device is like this:
/* the exact register layout is implementation dependent */
struct devregs{
unsigned short csr;   /* control & status */
unsigned short data;  /* data port */
};
/* bit patterns in the csr */
#define ERROR 0x1
#define READY 0x2
#define RESET 0x4
/* device 0 address (illustrative) and number of devices */
#define DEVADDR ((struct devregs *)0xffff0004)
#define NDEVS 4
/* busy-wait read of one byte from device devno */
unsigned int read_dev(unsigned devno){
struct devregs *dvp = DEVADDR + devno;
if(devno >= NDEVS)
return(0xffff);
while((dvp->csr & (READY|ERROR)) == 0)
; /* wait until done */
if(dvp->csr & ERROR){
dvp->csr = RESET;
return(0xffff);
}
return(dvp->data & 0xff);
}
Example 8.4
The technique of using a structure declaration to describe the device register layout and
names is very common practice. Notice that there aren’t actually any objects of that type
defined, so the declaration simply indicates the structure without using up any store.
To access the device registers, an appropriately cast constant is used as if it were pointing
to such a structure, but of course it points to memory addresses instead.
However, a major problem with previous C compilers would be in the while loop which tests
the status register and waits for the ERROR or READY bit to come on. Any self-respecting
optimizing compiler would notice that the loop tests the same memory address over and
over again. It would almost certainly arrange to reference memory once only, and copy the
value into a hardware register, thus speeding up the loop. This is, of course, exactly what we
don’t want; this is one of the few places where we must look at the place where the pointer
points, every time around the loop.
Because of this problem, most C compilers have been unable to make that sort of
optimization in the past. To remove the problem (and other similar ones to do with when to
write to where a pointer points), the keyword volatile was introduced. It tells the compiler
that the object is subject to sudden change for reasons which cannot be predicted from a
study of the program itself, and forces every reference to such an object to be a genuine
reference.
Here is how you would rewrite the example, making use of const and volatile to get what
you want.
/* the exact register layout is implementation dependent */
struct devregs{
volatile unsigned short csr;   /* control & status */
volatile unsigned short data;  /* data port */
};
#define ERROR 0x1
#define READY 0x2
#define RESET 0x4
#define DEVADDR ((struct devregs *)0xffff0004)
#define NDEVS 4
unsigned int read_dev(unsigned devno){
struct devregs *const dvp = DEVADDR + devno;
if(devno >= NDEVS)
return(0xffff);
while((dvp->csr & (READY|ERROR)) == 0)
; /* csr is volatile: re-read from the device every time */
if(dvp->csr & ERROR){
dvp->csr = RESET;
return(0xffff);
}
return(dvp->data & 0xff);
}
Example 8.5
The rules about mixing volatile and regular types resemble those for const. A pointer to
a volatile object can be assigned the address of a regular object with safety, but it is
dangerous (and needs a cast) to take the address of a volatile object and put it into a
pointer to a regular object. Using such a derived pointer results in undefined behaviour.
If an array, union or structure is declared with const or volatile attributes, then all of the
members take on that attribute too. This makes sense when you think about it—how could a
member of a const structure be modifiable?
That means that an alternative rewrite of the last example would be possible. Instead of
declaring the device registers to be volatile in the structure, the pointer could have been
declared to point to a volatile structure instead, like this:
struct devregs{
unsigned short csr;
unsigned short data;
};
volatile struct devregs *const dvp = DEVADDR + devno;
Now, just when you thought that you understood all that, here comes the final twist. A
declaration like this:
volatile struct devregs{
/* stuff */
}v_decl;
declares the type struct devregs and also a volatile-qualified object of that type called v_decl. It has the same effect as
struct devregs{
/* stuff */
}volatile v_decl;
– in both cases the qualifier applies to the declared object v_decl, not to the structure type itself.
If you do want to get a shorthand way of attaching a qualifier to another type, you can
use typedef to do it:
typedef const struct x{
int a;
}csx;

csx const_sx = {5};
struct x non_const_sx = {1};
A structure and a union are declared with the same general form:

struct struct_nm
{
/* members */
}struct_var_nm;

union union_nm
{
/* members */
}union_var_nm;

vi. Example

struct item_mst
{
/* members */
}it;

union item_mst
{
/* members */
}it;
Let's say a structure containing an int, char and float is created, and a union containing an int, char and float is declared.
struct TT{ int a; float b; char c; };
union UU{ int a; float b; char c; };
sizeof(struct TT) would be > 9 bytes (compiler dependent; 12 with padding if int, float and char are taken as 4, 4 and 1 bytes).
sizeof(union UU) would be 4 bytes, the size of its largest member. If a double member existed in the union, then the size of the union would be 8 bytes, while the size of the struct would be the cumulative size of all its members plus any padding.
Detailed Example:
struct foo
{
char c;
long l;
char *p;
};
union bar
{
char c;
long l;
char *p;
};
A struct foo contains all of the elements c, l, and p. Each element is separate and distinct.
A union bar contains only one of the elements c, l, and p at any given time. Each element is
stored in the same memory location (well, they all
start at the same memory location), and you can only refer to the element which was last
stored. (ie: after “barptr->c = 2;” you cannot reference
any of the other elements, such as “barptr->p” without invoking undefined behavior.)
Try the following program. (Yes, I know it invokes the above-mentioned “undefined
behavior”, but most likely will give some sort of output on most computers.)
==========
#include <stdio.h>
struct foo
{
char c;
long l;
char *p;
};
union bar
{
char c;
long l;
char *p;
};
int main(void)
{
union bar mybar;

mybar.c = 1;
mybar.l = 2L;
mybar.p = "This is mybar";

printf("%s\n", mybar.p);   /* only the member stored last may safely be read */
return 0;
}
==========
Running this prints the string stored last, "This is mybar".
char variables can be byte aligned and appear at any byte boundary
short (2 byte) variables must be 2 byte aligned, they can appear at any even byte boundary.
This means that 0x10004567 is not a valid location for a short variable but 0x10004566 is.
long (4 byte) variables must be 4 byte aligned, they can only appear at byte boundaries that
are a multiple of 4 bytes. This means that 0x10004566 is not a valid location for a long
variable but 0x10004568 is.
Structure padding occurs because the members of the structure must appear at the correct
byte boundary, to achieve this the compiler puts in padding bytes (or bits if bit fields are in
use) so that the structure members appear in the correct location. Additionally the size of
the structure must be such that in an array of the structures all the structures are correctly
aligned in memory so there may be padding bytes at the end of the structure too
struct example {
char c1;
short s1;
char c2;
long l1;
char c3;
};
In this structure, assuming the alignment scheme I have previously stated then
c1 can appear at any byte boundary, however s1 must appear at a 2 byte boundary so there
is a padding byte between c1 and s1.
c2 can then appear in the available memory location, however l1 must be at a 4 byte
boundary so there are 3 padding bytes between c2 and l1
c3 then appears in the available memory location, however because the structure contains a
long member the structure must be 4 byte aligned and must be a multiple of 4 bytes in size.
Therefore there are 3 padding bytes at the end of the structure. It would appear in memory
in this order
c1
padding byte
s1 byte 1
s1 byte 2
c2
padding byte
padding byte
padding byte
l1 byte 1
l1 byte 2
l1 byte 3
l1 byte 4
c3
padding byte
padding byte
padding byte
struct example {
long l1;
short s1;
char c1;
char c2;
char c3;
};
Then l1 appears at the correct byte alignment, s1 will be correctly aligned so no need for
padding between l1 and s1. c1, c2, c3 can appear at any location. The structure must be a
multiple of 4 bytes in size since it contains a long so 3 padding bytes appear after c3
It appears in memory in the order
l1 byte 1
l1 byte 2
l1 byte 3
l1 byte 4
s1 byte 1
s1 byte 2
c1
c2
c3
padding byte
padding byte
padding byte
I should point out that structure packing is platform and compiler (and in some cases
compiler switch) dependent.
Memory Pools are just a section of memory reserved for allocating temporarily to other parts
of the application
A memory leak occurs when you allocate some memory from the heap (or a pool) and then
delete all references to that memory without returning it to the pool it was allocated from.
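A minimal sketch of how such a leak typically happens in C (illustrative only):

#include <stdlib.h>

void leak_example(void)
{
    char *buf = malloc(100);
    buf = malloc(200);   /* the 100-byte block is now unreachable: a leak */
    free(buf);           /* only the second block is returned to the heap */
}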
Program:
#include <stdio.h>

struct MyStructA {
char a;
char b;
int c;
};

struct MyStructB {
char a;
int c;
char b;
};

int main(void) {
int sizeA = sizeof(struct MyStructA);
int sizeB = sizeof(struct MyStructB);
printf("A = %d\n", sizeA);   /* typically 8 with a 4-byte int */
printf("B = %d\n", sizeB);   /* typically 12: the member order causes extra padding */
return 0;
}
22. What is the difference between macro and constant variables in C?
Macros are replaced by the preprocessor, whereas a constant's data type is checked by the compiler.
Macros are replaced without any checking of their values; when the programmer wants a value that is
visible only within a single function (and type-checked), a constant is preferable to a macro.
The first technique comes from the C programming language. Constants may be defined
using the preprocessor directive, #define The preprocessor is a program that modifies your
source file prior to compilation. Common preprocessor directives are #include, which is used
to
include additional code into your source file, #define, which is used to define a constant and
#if/#endif, which can be used to conditionally determine which parts of your code will be
compiled. The #define directive is used as follows.
First, with const the type of the constant is defined: "pi" is float, "id_no" is int. This allows some type
checking by the compiler.
Second, these constants are variables with a definite scope. The scope of a variable relates
to the parts of your program in which it is defined. Some variables may exist only in certain
functions or in certain blocks of code.
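A small sketch contrasting the two techniques, using the pi and id_no names mentioned above (the values are illustrative):

#include <stdio.h>

#define MAX_ID 1000          /* preprocessor constant: plain text substitution, no type */

const float pi = 3.14159f;   /* typed constant: the compiler checks type and scope */
const int   id_no = 12345;

int main(void)
{
    printf("pi=%f id_no=%d max=%d\n", pi, id_no, MAX_ID);
    return 0;
}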
void swap(int *x, int *y)
{
int t;
t = *x;
*x = *y;
*y = t;
}

void isr()
{
int x = 1, y = 2;
swap(&x, &y);
}
1. Systems development projects usually have a test approach, or test strategy document,
which defines how testing will be performed throughout the lifecycle of the project. The V
model provides a consistent basis and standard for part of that strategy.
2. The V model explicitly suggests that testing (quality assurance) should be considered
early on in the life of a project. Testing and fixing can be done at any stage in the lifecycle.
However, the cost of finding and fixing faults increases dramatically as development
progresses. Evidence suggests that if a fault uncovered during design costs 1.0 monetary
unit to correct, then the same fault uncovered just before testing will cost 6.5 units, during
testing 15 units, and after release between 60 and 100 units. The need to find faults as soon
as possible reinforces the need for the quality assurance of documents such as the
requirements specification and the functional specification. This is performed using static
testing techniques such as inspections and walkthroughs.
3. It introduces the idea of specifying test requirements and expected outcomes prior to
performing the actual tests. For example, the acceptance tests are performed against a
specification of requirements, rather than against some criteria dreamed up when the
acceptance stage has been reached
4. The V model provides a focus for defining the testing that must take place within each
stage. The definition of testing is assisted by the idea of entry and exit criteria. Hence, the
model can be used to define the state a deliverable must be in before it can enter and leave
each stage. The exit criteria of one stage are usually the entry criteria of the next. In many
organizations, there is concern about the quality of the program code released by individual
programmers. Some programmers release code that appears to be fault-free, while others
release code that still has many faults in it. The problem of programmers releasing code
with different levels of robustness would be addressed in the exit criteria of unit design and
unit testing. Unit design would require programmers to specify their intended test cases
before they wrote any program code. Coding could not begin until these test cases had been
agreed with an appropriate manager. Second, the test cases would have to be conducted
successfully before the program could leave the unit test stage and be released to
integration testing.
5. Finally, the V model provides a basis for defining who is responsible for performing the
testing at each stage. Here are some typical responsibilities:
25. What is meant by Black box testing and white box testing?
White-box testing (also known as clear box testing, glass box testing, transparent
box testing, and structural testing) is a method of testing software that tests internal
structures or workings of an application, as opposed to its functionality (i.e. black-box
testing). In white-box testing an internal perspective of the system, as well as programming
skills, are used to design test cases. The tester chooses inputs to exercise paths through the
code and determine the appropriate outputs. This is analogous to testing nodes in a circuit,
e.g. in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration and system levels of the
software testing process, it is usually done at the unit level. It can test paths within a unit,
paths between units during integration, and between subsystems during a system–level
test. Though this method of test design can uncover many errors or problems, it might not
detect unimplemented parts of the specification or missing requirement.
White Box Testing: testing the application with coding/programming knowledge; the tester
needs to be able to read and work with the code.
Black Box Testing: testing the application without coding/programming knowledge; the tester
does not require coding knowledge and just examines the application's external functional
behaviour and GUI features.
Loop Testing
1. Simple Loops
Component testing
Integration testing
System testing
The primary difference between flash and EEPROM is the way they erase data. While
EEPROM erases and rewrites individual bytes of memory, flash devices
can only erase memory in larger blocks. This makes flash devices faster at rewriting,
as they can affect large portions of memory at once. Since a rewrite may erase
blocks whose data has not changed, it also adds unnecessary wear to the device, shortening
its lifespan in comparison with EEPROM.
Usage
Flash storage is commonly used in USB memory drives and solid state hard drives.
EEPROM is used in a variety of devices, from programmable VCRs to CD players.
28. Can structures be passed to functions by value?
Ans: Yes, structures can be passed by value, but the whole structure is copied onto the stack,
which wastes memory for large structures; passing a pointer to the structure is usually preferred.
main.c:
/*************************************
* main.c – calls functions defined in funcs.c
*************************************/
void Func1(void);
void Func2(void);

main()
{
Func1();
Func2();
}

funcs.c:
/*************************************
*
* Function definitions
*
*************************************/
#include <stdio.h>

void Func1(void)
{
puts("Func1 called");
}

void Func2(void)
{
puts("Func2 called");
}
32. What is the difference between pass by value and pass by reference in C?
Pass By Reference:
– In pass by reference, the address of the variable is passed to the function.
– Whatever changes are made to the formal parameter will affect the actual parameter.
– The same memory location is used for both variables (formal and actual).
– It is useful when you need to return more than one value.
Pass By Value:
– In this method the value of the variable is passed. Changes made to the formal parameter
will not affect the actual parameter.
– Different memory locations are created for both variables.
– A temporary variable is created on the function stack, which does not affect the original variable.
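A minimal sketch of the difference:

#include <stdio.h>

void by_value(int n)      { n = 100; }    /* changes only the local copy */
void by_reference(int *n) { *n = 100; }   /* changes the caller's variable */

int main(void)
{
    int x = 1;
    by_value(x);
    printf("after by_value:     %d\n", x);   /* still 1 */
    by_reference(&x);
    printf("after by_reference: %d\n", x);   /* now 100 */
    return 0;
}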
33. What is the difference between flash memory, EPROM and EEPROM?
EEPROM is an older, more reliable technology. It is somewhat slower than Flash. Flash and
EEPROM are very similar, but there is a subtle difference. Flash and EEPROM both use
floating-gate cells to trap electrons. Each cell represents one bit of data. The presence – or
absence – of electrons in a cell indicates whether the bit is a 1 or 0. The cells have a finite life:
every time a cell is erased, it wears out a little bit. In EEPROM, cells are erased one by one.
The only cells erased are those which are 1 but need to be 0. (Writing a 1 to a cell that is
0 causes very little wear.) In Flash, a large block is erased all at once; in some devices,
this "block" is the entire device. So in Flash, cells are erased whether they need it or not.
This cuts down on the lifespan of the device, but is much, much faster than the EEPROM
method of going cell by cell.
Erasure method: Both Flash and EEPROM erase cells by means of an electric field that pulls
the trapped electrons out of the cell. Other similar devices are EPROM
(sometimes UVEPROM) and OTPROM (sometimes PROM). EPROM/UVEPROM lacks the
structures that generate the electrical field for erasure. These devices have a window on
top, usually covered by a paper sticker. To erase, the sticker is removed and the device is
exposed to intense ultraviolet light for 30-45 minutes. The only difference between OTPROM
and UVEPROM is that OTPROM lacks the UV window – there is no way to erase the data.
Adding the UV window to the device package significantly increases cost, so there is a niche
for one-time-programmable devices.
Erasable Programmable Read Only Memory Chips
The information stored in an EPROM chip can be erased by exposing the chip to strong UV
light. EPROM chips are easily recognized by the small quartz window used for erasure. Once
erased the chip can be re-programmed.
EPROM is more expensive per unit, but can prove cheaper in the long run for
some applications. For example, if PROM were used for firmware that needed to be upgraded every
6 months or so, buying new chips each time could prove quite expensive!
Non-volatile memory
Non-volatile memory is computer memory that can retain the stored information even when
not powered. Examples of non-volatile memory include read-only memory (see ROM), flash
memory, most types of magnetic computer storage devices (e.g. hard disks, floppy discs
and magnetic tape), optical discs, and early computer storage methods such as paper tape
and punched cards. Forthcoming non-volatile memory technologies include FeRAM, CBRAM,
PRAM, SONOS, RRAM, Racetrack memory, NRAM and Millipede.
On CAN bus controller failure or an extreme accumulation of errors there is a state transition
to Bus Off. The controller is disconnected from the bus by setting it in a high-resistance
state. The Bus Off state should only be left by a software reset. After the software
reset the CAN bus controller has to wait for 128 x 11 recessive bits before it may transmit a
frame again. This is because other nodes may have pending transmission requests. It is
recommended not to perform a hardware reset, because the wait-time rule would then not
be followed.
37. What is the Virtual Function Bus?
The Virtual Function Bus (VFB) can be described as a system modelling and communication concept. It
is a logical entity that facilitates the concept of relocatability within the AUTOSAR software
architecture by providing a virtual infrastructure that is independent of any actual underlying
infrastructure and provides all services required for a virtual interaction between AUTOSAR
software components.
39. What is the difference between global and static global variables?
Global variables are variables defined outside of any function. Their scope starts at the
point where they are defined and lasts to the end of the file. They have external linkage,
which means that in other source files, the same name refers to the same location in
memory.
Static global variables are private to the source file where they are defined and do not
conflict with other variables in other source files which would have the same name.
40. How to access a global variable in other files?
Variables declared outside of a block are called global variables. Global variables
have program scope, which means they can be accessed everywhere in the program, and
they are only destroyed when the program ends.
To use a global variable that has been declared in another file, you have to
use a forward declaration or a header file, along with the extern keyword. extern tells the
compiler that you are not declaring a new variable, but instead referring to a variable
declared elsewhere.
Here is an example of using a forward declaration style extern:
global.cpp:
// declaration of g_nValue
int g_nValue = 5;
main.cpp:
// extern tells the compiler this variable is declared elsewhere
extern int g_nValue;

int main()
{
g_nValue = 7;
return 0;
}
Here is an example of using a header file extern:
global.cpp:
// declaration of g_nValue
int g_nValue = 5;
global.h:
// forward declaration, pulled in by any file that includes this header
extern int g_nValue;
Note that a local variable with the same name as a global variable hides the global inside its block:
int nValue = 5;

int main()
{
int nValue = 7; // hides the global nValue variable
nValue++; // increments local nValue, not global nValue
::nValue--; // decrements global nValue, not local nValue (C++ scope resolution)
return 0;
} // local nValue is destroyed
However, having local variables with the same name as global variables is usually a recipe
for trouble and should be avoided whenever possible. Using Hungarian notation, it is
common to declare global variables with a "g_" prefix. This is an easy way to differentiate
global variables from local variables and avoid variables being hidden due to naming
collisions.
New programmers are often tempted to use lots of global variables, because they are easy
to work with, especially when many functions are involved. However, this is a very bad idea.
In fact, global variables should generally be avoided completely!
Second, global variables are dangerous because their values can be changed by any
function that is called, and there is no easy way for the programmer to know that this will
happen. Consider the following program:
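The program referred to is not reproduced in the original text; the following hypothetical sketch makes the same point — a called function silently changes a global behind the caller's back:

#include <stdio.h>

int g_mode = 0;   /* global: any function can change it */

void do_something(void)
{
    g_mode = 5;   /* the caller has no way of knowing this happens */
}

int main(void)
{
    g_mode = 1;
    do_something();
    printf("g_mode = %d\n", g_mode);   /* prints 5, not the 1 that main() set */
    return 0;
}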
Global variables make every function call potentially dangerous, and the programmer has no
easy way of knowing which ones are dangerous and which ones aren’t! Local variables are
much safer because other functions can not affect them directly. Consequently, global
variables should not be used unless there is a very good reason!
Clearing a bit
Use the bitwise AND operator (&) to clear a bit: invert a mask with the bitwise NOT operator (~),
then AND it with the value. That clears bit x.
Toggling a bit
The XOR operator (^) can be used to toggle a bit.
Checking a bit
You didn't ask for this but I might as well add it: AND the value with a mask that has only bit x
set and test the result.
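A short sketch of all three operations on bit x (illustrative values):

#include <stdio.h>

int main(void)
{
    unsigned int flags = 0xFFu;
    unsigned int x = 3;            /* bit position to operate on */

    flags &= ~(1u << x);           /* clear bit x: invert the mask, then AND */
    flags ^=  (1u << x);           /* toggle bit x: it was just cleared, so it becomes 1 */

    if (flags & (1u << x))         /* check bit x */
        printf("bit %u is set\n", x);
    return 0;
}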
The historic difference has been price: 8-bit was cheapest, 32-bit was expensive. This is still
generally true, but the price of 16-bit parts has come down significantly.
Most 8-bit processors are old and run on old architectures, so they tend to be slower. They
are also made cheaply, since price is where the competition is at the 8-bit point, which again
pushes them towards slowness. They also tend to have a low limit on supported
RAM and other storage, though the actual amount depends on the family.
16-bit processors tend to focus on price as well, but there is a large range of parts available,
some of which have fairly high performance and large amounts of on-chip peripherals. These
parts usually perform faster than 8-bit parts on math where the precision is greater than 8
bits, and tend to have more addressable memory.
Example: int (*fp)(int, int); -> fp is a pointer to a function that takes two ints and returns an int.
e.g.:
typedef unsigned int UINT32;
A macro [#define] is a direct textual substitution performed before the code is compiled. In the
example below it is just a textual substitution, and there is even the possibility of redefining
the macro:
e.g.:
#define chPointer char *
#undef chPointer
#define chPointer int *
typedef is a true declaration handled by the compiler, whereas #define is just a replacement
done by the preprocessor; in particular, typedefs can correctly encode pointer types.
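A sketch of the classic pitfall, using variants of the chPointer example above (the names are illustrative):

#include <stdio.h>

#define chPointerM char *     /* macro: plain text substitution */
typedef char *chPointerT;     /* typedef: a real pointer type */

int main(void)
{
    chPointerM a, b;   /* expands to: char *a, b;  so b is a plain char, not a pointer */
    chPointerT c, d;   /* both c and d are char * */

    printf("%zu %zu %zu %zu\n", sizeof a, sizeof b, sizeof c, sizeof d);
    return 0;
}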
For example, consider the trade-offs between a macro and a function:
1. A macro consumes less time. When a function is called, arguments have to be passed
to it, those arguments are accepted by corresponding formal parameters in the
function, they are processed, and finally the function returns a value that is assigned
to a variable (except for a void function). If a function is invoked a number of times,
the call overheads add up. With a macro, on the other hand, the expansion has already
taken place and replaced each occurrence of the macro in the source code before
compilation, so no call overhead is paid at run time.
2. A function consumes less memory. While a program replete with macros may look
succinct on the surface, prior to compilation all the macro occurrences are replaced by
their corresponding expansions, which can consume considerable memory. On
the other hand, even if a function is invoked 100 times, its code still occupies the same
space. Hence a function is more amenable to low memory requirements.
48. What is inline function?
An inline function is an optimization technique used by compilers. One can simply prepend
the inline keyword to a function prototype to make the function inline. It instructs the
compiler to insert the complete body of the function wherever that function is used in the code.
Advantages:
1) It removes function-calling overhead.
2) It also saves the overhead of pushing/popping variables on the stack when a function is called.
3) It also saves the overhead of the return call from a function.
4) It increases locality of reference by utilizing the instruction cache.
5) After in-lining the compiler can also apply intraprocedural optimization if specified. This is the
most important one; in this way the compiler can now focus on dead code elimination, give
more stress to branch prediction, induction variable elimination etc.
Disadvantages:
1) It may increase function size so that it may not fit in the cache, causing lots of cache misses.
2) After in-lining, if the number of variables competing for registers increases, they may create
overhead on register allocation.
3) It may cause compilation overhead: if somebody changes code inside an inline function,
then all calling locations will also be recompiled.
4) If used in a header file, it will make your header file large and may also make it
unreadable.
5) If somebody uses too many inline functions, the resulting larger code size may
cause thrashing in memory; more and more page faults bring down your
program performance.
6) It is not useful for embedded systems where a large binary size is not preferred at all due to
memory size constraints.
49. What is the difference between a macro and a inline function?
Macros:
1. Input argument datatype checking can't be done.
2. The compiler has no idea about macros; they are expanded by the preprocessor.
3. Code is not as readable or debuggable.
4. Macros are always expanded/replaced during preprocessing, hence code size is larger.
5. A macro can't return a value the way a function does.
Inline functions:
1. Input argument datatype checking can be done.
2. The compiler knows about inline functions.
3. Code is readable.
4. Inline functions may not always be expanded; the compiler decides.
5. They can return a value.
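A small sketch of the macro-expansion difference (the double evaluation of a macro argument with a side effect is the classic pitfall; the names here are illustrative):

#include <stdio.h>

#define SQUARE_M(x) ((x) * (x))     /* macro: no type check, argument text pasted twice */

static inline int square_f(int x)   /* inline function: typed, argument evaluated once */
{
    return x * x;
}

static int calls = 0;

static int next(void)               /* helper with a visible side effect */
{
    return ++calls;
}

int main(void)
{
    calls = 0;
    printf("%d\n", SQUARE_M(next()));   /* next() runs twice: prints 2 (1 * 2) */
    calls = 0;
    printf("%d\n", square_f(next()));   /* next() runs once: prints 1 */
    return 0;
}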
50. Preprocessor Statements #ifdef, #else, #endif
These provide a rapid way to “clip” out and insert code.
Consider:
#define FIRST
main()
{
int a, b, c;
#ifdef FIRST
a = 2;
b = 6;
c = 4;
#else
printf("Enter a:");
scanf("%d", &a);
printf("Enter b:");
scanf("%d", &b);
printf("Enter c:");
scanf("%d", &c);
#endif
/* additional code */
}
Note that if FIRST is defined (which it is in the above) the values of a, b and c are hardcoded
to values of 2, 6 and 4. This can save a lot of time when developing software as it avoids
tediously typing everything in each and everytime you run your routine. When FIRST is
defined, all that is passed to the compiler is the code between the #ifdef and the #else. The
code between the #else and the #endif is not seen by the compiler. It is as if it were all a
comment.
Once you have your routine working and you want to bring back the printf and scanf calls, all
that is required is to go back and delete the #define FIRST line. Now the compiler no longer sees
the hard-coded assignments between #ifdef FIRST and #else; instead it compiles the code
between #else and #endif.
The receivers calculate the CRC in the same way as the transmitter as follows:
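The ISO 11898 / Bosch specification defines the CRC as a 15-bit checksum with generator polynomial x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1 (0x4599), computed over the frame bits from SOF up to the end of the data field. A bit-by-bit sketch of that shared calculation (illustrative, not production code):

#include <stdint.h>
#include <stddef.h>

/* bits[] holds the destuffed frame bits (0 or 1) from SOF to the end of the data field */
uint16_t can_crc15(const uint8_t *bits, size_t nbits)
{
    uint16_t crc = 0;
    size_t i;
    for (i = 0; i < nbits; i++) {
        uint16_t crc_nxt = (uint16_t)(bits[i] ^ ((crc >> 14) & 1u));
        crc = (uint16_t)((crc << 1) & 0x7FFFu);   /* shift left, keep 15 bits */
        if (crc_nxt)
            crc ^= 0x4599u;                       /* XOR with the generator */
    }
    return crc;
}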
How to change the baud rate in CANoe without changing the code?
The bit rate may be changed by either changing the oscillator frequency, which is usually
restricted by the processor requirements, or by specifying the length of the bit segments in
“time quantum” and the prescaler value.
In the CANoe tool, we can change the Bus Timing Register 0 & 1 values to set the desired baud
rate.
In AUTOSAR, we can use post-build configuration for the CAN baud rate values.
In general, an operating system (OS) is responsible for managing the hardware resources of
a computer and hosting applications that run on the computer. An RTOS performs these
tasks, but is also specially designed to run applications with very precise timing and a high
degree of reliability. This can be especially important in measurement and automation
systems where downtime is costly or a program delay could cause a safety hazard. To be
considered “real-time”, an operating system must have a known maximum time for each of
the critical operations that it performs (or at least be able to guarantee that maximum most
of the time). Some of these operations include OS calls and interrupt handling. Operating
systems that can absolutely guarantee a maximum time for these operations are commonly
referred to as “hard real-time”, while operating systems that can only guarantee a
maximum most of the time are referred to as “soft real-time”.
Example: Imagine that you are designing an airbag system for a new model of car. In this
case, a small error in timing (causing the airbag to deploy too early or too late) could be
catastrophic and cause injury. Therefore, a hard real-time system is needed; you need
assurance as the system designer that no single operation will exceed certain timing
constraints. On the other hand, if you were designing a mobile phone that receives
streaming video, it may be acceptable to lose a small amount of data occasionally, as long as
the stream keeps up on average. For this application, a soft real-time operating system may
suffice. An RTOS can guarantee that a program will run with
very consistent timing. Real-time operating systems do this by providing programmers with
a high degree of control over how tasks are prioritized, and typically also allow checking to
make sure that important deadlines are met.
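As an illustration of that control over task priorities, here is a minimal sketch using POSIX real-time scheduling (an assumption on my part, not something from the original text; the priority value and task name are made up, and a plain desktop kernel still only gives soft real-time behaviour with this):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Placeholder for work that needs predictable latency. */
static void *control_task(void *arg)
{
    (void)arg;
    /* ... periodic control loop would run here ... */
    return NULL;
}

int main(void)
{
    pthread_t      tid;
    pthread_attr_t attr;
    struct sched_param param = { .sched_priority = 80 }; /* high real-time priority */

    pthread_attr_init(&attr);
    /* Ask for the SCHED_FIFO real-time policy instead of the default
       time-sharing policy, and apply it explicitly to the new thread. */
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &param);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    if (pthread_create(&tid, &attr, control_task, NULL) != 0)
        fprintf(stderr, "pthread_create failed (real-time priority may need elevated rights)\n");
    else
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}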
How Do Real-Time OSs Differ from General-Purpose OSs?
Operating systems such as Microsoft Windows and Mac OS can provide an excellent platform
for developing and running your non-critical measurement and control applications.
However, these operating systems are designed for different use cases than real-time
operating systems, and are not the ideal platform for running applications that require
precise timing or extended up-time. This section will identify some of the major under-the-
hood differences between both types of operating systems, and explain what you can expect
when programming a real-time application.
Interrupt Latency
Interrupt latency is measured as the amount of time between when a device generates an
interrupt and when that device is serviced. While general-purpose operating systems may
take a variable amount of time to respond to a given interrupt, real-time operating systems
must guarantee that all interrupts will be serviced within a certain maximum amount of
time. In other words, the interrupt latency of a real-time operating system must be bounded.
How do you find a bug using the debugger if a pointer is pointing to an illegal value?
Answer: Usually this leads to a system exception. Every core (CPU) has its own way of registering
the address that caused the exception, so to solve it you need to dig a little into the core's
manual. The address is usually held in specialised registers (e.g., on the PowerPC architecture)
or placed on the stack at a certain location (e.g., on ARM). Following that address will lead you
to the invalid pointer.
If two CAN nodes transmit messages with the same ID at the same time but with different data,
which node will win arbitration? How can you test it?
Is it possible to declare a struct and a union one inside the other? Explain with an example.
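Yes; a minimal sketch (the type and field names are purely illustrative) showing a union nested inside a struct and a struct nested inside a union:

#include <stdio.h>

struct signal {
    unsigned char type;          /* discriminator: 0 = raw, 1 = scaled */
    union {                      /* union nested inside a struct */
        unsigned int raw;
        float        scaled;
    } value;
};

union frame_payload {
    unsigned char bytes[8];      /* raw 8-byte payload */
    struct {                     /* struct nested inside a union */
        unsigned int id;
        unsigned int counter;
    } decoded;
};

int main(void)
{
    struct signal s = { .type = 1, .value.scaled = 3.5f };
    union frame_payload p;

    p.decoded.id      = 0x154;
    p.decoded.counter = 7;

    printf("signal type %u, scaled value %.1f\n", s.type, s.value.scaled);
    printf("payload id 0x%X, counter %u\n", p.decoded.id, p.decoded.counter);
    return 0;
}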
1. SPI and I2C difference?
2. What are the advantages of UDS?
3. What is a cross compiler?
4. Unit / integration / system testing.
5. Regression testing.
6. Test case types.
7. malloc vs calloc difference.
8. Function pointers: advantages and where are they used? (A short sketch follows this list.)
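For item 8, a minimal sketch of a function-pointer dispatch table (the names are illustrative), a pattern commonly used for callbacks and state machines in embedded code:

#include <stdio.h>

/* Handlers selected at run time through a function pointer. */
static void handle_start(void) { printf("start\n"); }
static void handle_stop(void)  { printf("stop\n");  }

typedef void (*handler_t)(void);

int main(void)
{
    /* Dispatch table indexed by an event code: 0 = start, 1 = stop. */
    handler_t handlers[] = { handle_start, handle_stop };

    int event = 1;       /* would normally come from the application */
    handlers[event]();   /* dispatch without an if/switch chain: prints "stop" */
    return 0;
}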
How many CAN database (DBC) files are required for a CAN network simulation in the CANoe tool?
Is it possible to simulate ECUs other than the Test ECU without CAPL scripting in the CANoe tool?
What is the difference between a cross compiler and a native compiler?
Answer: A cross compiler runs on one machine but generates code that executes on a different
target machine. A native compiler generates code that executes on the same machine on which it
was compiled.
Author - Sudhakar Maradana