Part 3 - Java programming Advanced 1
1. Exceptions with best practices
1.1. Basics
1.1.1. Generate exception
1.1.2. Handle one exception
1.1.3. Multiple exceptions
1.1.4. Exception can be referenced polymorphically
1.2. Checked and unchecked exceptions
1.2.1. Checked exceptions
1.2.2. Unchecked exceptions
1.3. Finally block
1.4. Try-with-resources statement
1.5. Suppressed exceptions
1.6. Creating new exceptions (best practices)
1.7. Assertions
2. Input-Output
A good programmer thinks about these situations and writes code to handle them.
This leads to robust software; otherwise your code will be very fragile.
1.1. Basics
An exception is simply an object (like any other Java object) of the class Throwable or
one of its subclasses.
When we generate an exception in a particular method and we do not handle it in that method,
we should declare the exception in the method declaration, so that all other methods that
use this particular method know that it can generate an exception which they should handle.
To declare such an exception in the method declaration, use the keyword throws
followed by one or more exception classes, as in the example below.
You can declare several exception classes in the method declaration, separated by ',', as in this
example:
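A minimal sketch of declaring exceptions with throws (the method names, file path, and exception types here are illustrative assumptions, not taken from the original slides):

    import java.io.FileNotFoundException;
    import java.io.IOException;

    public class ThrowsDemo {

        // Declares a single checked exception that callers must handle or re-declare
        static void readConfig(String path) throws FileNotFoundException {
            throw new FileNotFoundException("Config file not found: " + path);
        }

        // Declares several exception classes, separated by ','
        static void loadData(String path) throws IOException, InterruptedException {
            readConfig(path);      // FileNotFoundException is a subclass of IOException
            Thread.sleep(100);     // may throw InterruptedException
        }
    }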
When an exception is generated in the try block, the code in the matching catch block is executed
automatically.
A catch block takes as a parameter an instance of the exception that it will handle.
If you want to write recovery code that runs when an exception is generated, you need to write
it in the catch block.
When an exception is generated in the try block, all the remaining code in that block is
skipped and the handler code in the catch block is executed.
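A small illustrative try/catch; the division-by-zero scenario is an assumption used only to show the control flow:

    public class TryCatchDemo {
        public static void main(String[] args) {
            try {
                int result = 10 / 0;              // generates ArithmeticException
                System.out.println(result);       // skipped: the rest of the try block does not run
            } catch (ArithmeticException e) {     // the catch parameter is the exception instance
                System.out.println("Recovery code: " + e.getMessage());
            }
            System.out.println("Execution continues after the handler");
        }
    }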
You can handle a generated exception in any method of the call tree (call stack) of the method
in which the exception is generated.
If an exception is generated, the rest of the code is skipped until a handler
block (try/catch) is found: the exception is passed to the JVM, and the JVM first checks whether there is
exception-handling code within the current method. If it is not there, the JVM goes up the call stack
(the whole tree of calling methods) and tries to find an exception handler there.
o An exception handler exists in the method or in the call stack: the catch
block is executed.
o None exists: the JVM prints the exception message and the stack trace to the
console and then skips the rest of the code (the thread terminates).
When catch blocks handle exceptions that have a subclass/superclass relationship, you should
order them from subclass to superclass (the subclass exception handler must come before
the superclass handler), otherwise you get a compiler error.
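A sketch of the required catch ordering (FileNotFoundException before its superclass IOException); the file name is just a placeholder:

    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.IOException;

    public class CatchOrderDemo {
        public static void main(String[] args) {
            try {
                FileInputStream in = new FileInputStream("data.txt");
                System.out.println(in.read());
                in.close();
            } catch (FileNotFoundException e) {   // subclass handler must come first
                System.out.println("File is missing: " + e.getMessage());
            } catch (IOException e) {             // superclass handler after; reversing the order is a compiler error
                System.out.println("I/O problem: " + e.getMessage());
            }
        }
    }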
Sometimes we have one invocation statement (for example, closing a resource) that needs to be
executed for sure; we need a guarantee.
For that we use the finally block. It is an optional block and, when present, it is always
executed. It is not executed only when there is a severe JVM error or a system exit
(which terminates the JVM itself).
The finally block must follow (come after) the catch blocks. We cannot put the finally block between the try
and catch blocks or between 2 catch blocks. If no catch block exists, then the finally block can
directly follow the try block.
A try block must have at least one catch block or a finally block.
We cannot have more than one finally block associated with a try block.
Let us now look at the control flow when a finally block is present:
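A sketch of the control flow with finally; the stream type and file name are placeholder assumptions:

    import java.io.FileInputStream;
    import java.io.IOException;

    public class FinallyDemo {
        public static void main(String[] args) {
            FileInputStream in = null;
            try {
                in = new FileInputStream("data.txt");
                System.out.println(in.read());
            } catch (IOException e) {
                System.out.println("Handler: " + e.getMessage());
            } finally {
                // Always executed, whether or not an exception was generated
                System.out.println("finally: cleaning up");
                if (in != null) {
                    try {
                        in.close();
                    } catch (IOException e) {
                        // ignore close failure in this sketch
                    }
                }
            }
        }
    }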
In this example the finally block is very ugly, with a nested try/catch block, the
additional null check, and the close method invocation.
To avoid this, we have the try-with-resources statement introduced in Java 7. In the
next picture we have the same code but with try-with-resources. It looks much cleaner:
there is no finally block and we have only the try/catch block.
Here the resources are created within the parentheses that follow the try keyword.
The declaration must be done within the parentheses; it does not work if the declaration is done
outside the parentheses and only the initialization is done within them.
With try-with-resources, the entire cleanup operation is taken care of implicitly: the
close method is invoked implicitly after the try block is executed (Automatic
Resource Management, ARM).
In reality, the compiler inserts a finally block at the end in the byte code. If an
exception is generated, the close method is still invoked, just as in the case of using a
finally block explicitly.
The resource created in the parentheses (the variable 'in') is implicitly final, so it
cannot be reassigned within the try block.
Exceptions generated in the implicit finally block of try-with-resources are
suppressed (attached to the primary exception instead of replacing it).
Try-with-resources, multiple resources:
General syntax:
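A sketch of the general syntax with multiple resources (the file names are placeholders); resources are closed implicitly in the reverse order of their declaration:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class TryWithResourcesDemo {
        public static void main(String[] args) {
            // Resources are declared inside the parentheses, separated by ';'
            try (FileInputStream in = new FileInputStream("source.txt");
                 FileOutputStream out = new FileOutputStream("target.txt")) {
                int b;
                while ((b = in.read()) != -1) {
                    out.write(b);
                }
            } catch (IOException e) {
                System.out.println("I/O problem: " + e.getMessage());
            }
            // close() is invoked implicitly, first for 'out' and then for 'in'
        }
    }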
You should not extend Error; you should not subclass it, because there is a
strong convention that errors are generated only by the JVM, since they are JVM
related and represent abnormal conditions. You can do it, but you shouldn't.
Item 58: Use checked exceptions for recoverable conditions and runtime
exceptions for programming errors.
It is recommended to use:
o Checked exceptions: when you have recoverable conditions. That means an
exceptional situation such as a partner system being down or the database being
unavailable.
o Runtime exceptions: when you have a programming error. For
example, in a division operation you cannot have zero as the divisor.
Should I use checked or unchecked? Important criteria for deciding to go with
a checked exception would be:
o It happened due to some exceptional situation and not a programming
error.
o The caller can reasonably be expected to recover from it.
Item 65: don’t ignore exceptions.
It neither’s nor recommended in java programming to ignore exception.
A very common mistake is to ignore exceptions by using empty catch blocks. If
the API designer is throwing a checked exception, then catching it with an empty
catch block implies that you are ignoring the exceptional situation, which the item
relates to a fire alarm.
Similarly, if an empty catch block is catching an unchecked exception, then it is
equally dangerous as the program might continue to work silently, but might
result in some other very serious errors.
You ignore an exception when you handle it with a catch block but do no
treatment in that block (an empty catch block), as in this example:
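The anti-pattern as a minimal sketch; the parse scenario is only an illustrative assumption:

    public class IgnoredExceptionDemo {
        static int parseQuietly(String text) {
            int value = 0;
            try {
                value = Integer.parseInt(text);
            } catch (NumberFormatException e) {
                // empty catch block: the exceptional situation is silently ignored -- don't do this
            }
            return value;
        }
    }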
You can wrap a lower-level exception by using the exception constructor that takes a message and
a Throwable cause.
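A sketch of exception chaining with a hypothetical DataAccessException (the class name and the SQL failure are illustrative assumptions):

    import java.sql.SQLException;

    // Hypothetical higher-level exception used only for illustration
    class DataAccessException extends RuntimeException {
        DataAccessException(String message, Throwable cause) {
            super(message, cause);   // keeps the original exception as the cause
        }
    }

    public class ChainingDemo {
        static void loadUser(long id) {
            try {
                throw new SQLException("connection refused");   // simulate a low-level failure
            } catch (SQLException e) {
                throw new DataAccessException("Could not load user " + id, e);
            }
        }
    }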
Item 57: Use exceptions only for exceptional conditions
Don’t use them for regular control flow
For example, calling the next-item method inside a try/catch and relying on the exception it throws
after the last item to end the loop is using exceptions for regular control flow, as in this picture.
If one or both of these conditions do not hold, then an exception would be more
appropriate.
Take time to carefully document all exceptions that your method throws by using the
Javadoc tag @throws, which is used to document exceptions.
Item 38: check parameters for validity
It is about precondition checks, which we already discussed.
You should think about parameter restrictions when writing a new method or
constructor.
If there are any restrictions, you need to check them at the beginning of the method,
throw appropriate unchecked exceptions, and also document them.
For non-public methods, it is recommended to do parameter checks using another
feature of Java called assertions, which we will see next.
1.7. Assertions
Assertions are mainly about detecting errors during development and testing time.
There are 2 factors that contribute to software reliability:
Robustness: the software's ability to withstand any errors that may happen in
exceptional situations; that is, the software should continue to execute in such
situations.
We can achieve that using exceptions and exception handling.
Correctness: deals with the software doing the right thing and producing correct results.
Assertions can help with correctness.
Normally when we develop software, we write our code based on what we gather from
our product managers or from requirement documentation.
In the process, we make certain assumptions, and most of the time
these assumptions are right. But sometimes they may be wrong, and it would
be ideal to detect that at development time itself.
Assumptions are in effect Boolean expressions, and we need to be able to test them;
assertions are helpful in testing our assumptions, that is, the correctness of our
programs.
An assert statement simply checks a boolean condition; it does nothing if the condition is true, but
throws an AssertionError (which, if uncaught, immediately terminates the program) when it is false.
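A minimal assert example; the non-negative-balance invariant is an assumed scenario, not from the slides:

    public class AssertDemo {
        static int withdraw(int balance, int amount) {
            int newBalance = balance - amount;
            // Assumption being tested: the balance never goes negative.
            // The optional second operand becomes the AssertionError detail message.
            assert newBalance >= 0 : "negative balance: " + newBalance;
            return newBalance;
        }

        public static void main(String[] args) {
            System.out.println(withdraw(100, 30));   // fine
            System.out.println(withdraw(100, 200));  // fails when run with: java -ea AssertDemo
        }
    }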
Benefits:
Assertions are very effective in quickly detecting bugs during development
time.
An assertion expresses an assumption that you think is right, so if it fails for a particular instance
of data, it means your assumption is wrong and you have to fix it.
Assertions serve as documentation. In some situations assertions can be better than
comments: for example, assume there is a non-obvious computation and there
is also a comment explaining it. If the assumption changes and
consequently the computation logic changes, then the comment also has to
change, otherwise it gets out of date. Assertions, on the other hand, are
checked at runtime and so would most likely fail and force the developer to
update them.
In that sense assertions are like active comments.
(Item 38) Assertions can be used anywhere in the program, but one good place to use
them is for validating parameters, if a parameter has any restrictions or preconditions, as this
particular item recommends (Item 38).
Public methods: the item suggests throwing an unchecked exception if there is a
violation of preconditions. This would be a programming error in the API client.
You shouldn't use assertions in public methods because:
Assertions are disabled by default.
Assertions throw a generic AssertionError, which may not be helpful to API clients in
fixing the problem.
Non-public methods: the item suggests using assertions.
Non-public methods are not invoked directly by API clients; it is internal code that invokes
them.
The owner of the code is expected to test them extensively during development with the
help of assertions. This way, in production, non-public methods should always be
invoked with correct parameter values.
Can we have JUnit and assertions in the same code?
Assertions complement unit testing.
Unit testing is all about automated, repeatable testing of program correctness; that
is, no program change should break our tests.
You should still use assertions:
o When you update some logic of your project, you also update the relevant assertions; they
then help you test the new logic.
JUnit is about entire methods, not about method parameters or a single
statement within a method. It is generally about a block of code.
Assertions, on the other hand, are usually finer-grained: a single statement or
validating one parameter.
Enabling or disabling assertions:
They are disabled by default, to ensure that they are not a performance liability on
production systems.
To enable them, the command-line option -ea (or -enableassertions) can be passed to the
java interpreter.
Assertions can be enabled or disabled at a specific class or package level too:
o To enable: -ea
o To disable: -da
To enable or disable assertions in Eclipse, go to Run As > Run
Configurations > Arguments and write the option in the VM arguments box. Command-line
sketches follow this list.
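Hedged command-line sketches for enabling/disabling assertions (the class, package, and application names are placeholders):

    java -ea MyApp                        // enable assertions for all application classes
    java -ea:com.example.MyClass MyApp    // enable for one class only
    java -ea:com.example... MyApp         // enable for a package and its subpackages
    java -ea -da:com.example.Util MyApp   // enable globally but disable for one class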
2. Input-Output
In any software, it is very common to read data from some source and also to write some
data to some destination.
The source or destination can be a file, which can be on the local machine or even on some
remote FTP server. In this case, you read something from a file or write something
into a file.
Example: reading data from a CSV file or generating a CSV file.
The I/O layer is simply a software component that specializes in doing the reading and
writing.
Similarly, when interacting with a web service like a REST API, you request some
data and process the response, and when requesting the data you may sometimes have to
send some data to the API, that is, write data into the network, which will be passed
to the API server.
Sometimes we have to download web pages (for example, in our app we would
download all the web links in the system).
The same bit pattern would represent something else in an image file.
A single byte can be any of 256 patterns (2^8). It can represent as many as 256 characters
using the different variations of a byte.
But to represent more characters, we need to use more than one byte.
Processing text is very complex due to the number of ways in which the characters of different
languages can be represented, hence the number of encoding schemes.
Programs such as browsers do not understand all characters of all languages, so any character that is not
understood is replaced with ? (usually for international characters).
Encoding schemes:
Every file uses some encoding scheme to represent its content.
An encoding scheme is basically an algorithm which maps characters to numbers
whose binary representations are used for storage.
Example encoding schemes: ASCII, UCS-2, UTF-16, UTF-32.
Each encoding scheme is an implementation of some character set; for example, UTF-16
and UTF-32 are both implementations of the Unicode character set.
Character sets:
ASCII: 7 bits, for unaccented English characters.
ISO/IEC 8859: standard 8-bit ASCII extensions. It covers a number of different variations for
different regions of the world.
DBCS (Double-Byte Character Sets): for Asian characters.
These resulted in decoding issues: for instance, the hexadecimal code of a particular character in
one country would correspond to a completely different character in some other country.
To solve these problems, Unicode was created to cover all languages in the world.
Unicode:
Maintained by the Unicode Consortium.
It is backward compatible with 7-bit US-ASCII.
The initial assumption was that:
o 16 bits, which can represent 65,536 characters, would suffice to cover all languages.
o These characters together as a group are referred to as the Basic Multilingual
Plane (BMP).
o UCS-2: uses exactly 16 bits to represent any character.
To get the encoding of a string in Java, you can use the getBytes method and
pass it the name of the encoding you need, as in these examples:
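A small sketch of String.getBytes with explicit encodings (the sample string is a placeholder):

    import java.io.UnsupportedEncodingException;
    import java.nio.charset.StandardCharsets;

    public class EncodingDemo {
        public static void main(String[] args) throws UnsupportedEncodingException {
            String text = "café";

            byte[] utf8  = text.getBytes(StandardCharsets.UTF_8);    // 'é' takes 2 bytes in UTF-8
            byte[] utf16 = text.getBytes("UTF-16");                  // charset given by name
            byte[] ascii = text.getBytes(StandardCharsets.US_ASCII); // 'é' is replaced with '?'

            System.out.println(utf8.length + " " + utf16.length + " " + ascii.length);
        }
    }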
2.2. Stream IO
In stream I/O, reading and writing is handled by something called streams.
Stream
It is basically a connection between the Java program and a data source or sink
(destination).
It is represented by a class.
It is also specific to the type of source or sink:
o If the source or sink is a file: you use a specific type of stream.
o When dealing with the network, a different type of stream is used.
There are 2 types of streams:
o Input stream: reads some data from a source.
o Output stream: writes some data to a destination.
When working with streams, there are three operations involved:
o Open the stream.
o Read/write data.
o Close the stream: this frees the system resources (socket or file handle) that the
stream was using. It can be closed inside a finally block.
Operating systems have limits on the number of sockets or file handles that can be
open (you risk not being able to open any new stream), so you should close a stream after using it.
Standard template for opening and closing streams:
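A sketch of the standard open/use/close template (the file name is a placeholder); later sections replace this pattern with try-with-resources:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class StreamTemplateDemo {
        public static void main(String[] args) {
            InputStream in = null;
            try {
                in = new FileInputStream("data.bin");   // 1. open the stream
                int b = in.read();                      // 2. read/write data
                System.out.println("first byte: " + b);
            } catch (IOException e) {
                System.out.println("I/O problem: " + e.getMessage());
            } finally {
                if (in != null) {
                    try {
                        in.close();                     // 3. close to free the file handle
                    } catch (IOException e) {
                        // ignore close failure in this sketch
                    }
                }
            }
        }
    }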
Stream classification: streams come in 2 kinds (because Java differentiates between processing
character data and anything else):
Byte streams: used for non-character data like images. They also have 2 types:
o InputStream: base class for all classes used for reading non-character data
from anything.
o OutputStream: base class for all non-character output streams.
Character streams: used for character data like text. They also have 2 types:
o Reader: base class for all character input streams.
o Writer: base class for all character output streams.
2.2.1. Byte Streams
Byte streams are used when we want to deal with raw bytes; that is, we want to read raw
bytes serially or write raw bytes serially.
Character streams are also built on top of byte streams.
Read operation:
Method for reading exactly one byte at a time:
All read calls block when no data is available (when data is not available,
the call waits until the data becomes available), for example when reading data from the network and
the sender is not sending data right now.
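A sketch of reading one byte at a time with InputStream.read() (the file name is a placeholder):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ReadOneByteDemo {
        public static void main(String[] args) throws IOException {
            try (InputStream in = new FileInputStream("data.bin")) {
                int b;
                // read() returns the next byte as an int in 0..255, or -1 at end of stream
                while ((b = in.read()) != -1) {
                    System.out.printf("%02x ", b);
                }
            }
        }
    }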
Write operations:
Method for writing one byte at a time to the output stream:
o The input parameter is an integer (32 bits), but this method needs to write only one
byte.
o It writes the least significant byte, that is, the 8 lower-order bits.
Method used to write groups of bytes:
o It is a concrete method.
o It is used to write groups of bytes.
o The content to write is stored in the input array (byte[]).
o It has 3 parameters:
Byte array
Offset
Length
o It will write 'length' number of bytes from the input array, starting at index
position 'offset'.
o Internally it repeatedly invokes the abstract write method (the subclass's
implementation is invoked).
There is another write method like this one which takes only a byte array as input.
It is simply the same method, called with 0 as the offset and the array length as the length.
It writes the array completely, starting from index position zero.
We're going to create a Shape interface and concrete classes implementing the Shape
interface. We will then create an abstract decorator class, ShapeDecorator,
implementing the Shape interface and having a Shape object as its instance variable.
RedShapeDecorator is a concrete class implementing ShapeDecorator.
DecoratorPatternDemo, our demo class, will use RedShapeDecorator to decorate
Shape objects.
2.2.1.3.2. File Input Stream
It’s used for reading data from files
Its subclass for Input stream
We can create instance of file input stream using constructor with parameter string like
file name
If file does not exist, the constructor does not create new file. It would simply throw a file
not found exception
The constructor would generate File not found exception if:
File to read does not exist
File is directory
Cannot be opened for any other reason
This constructor would create new file if this file not exist but if file exist it would
override the file
The constructor would generate File not found exception if:
Cannot be created for writing
File is directory
Cannot be opened for any other reason
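A sketch of creating the two file streams by file name (the names are placeholders):

    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class FileStreamCreationDemo {
        public static void main(String[] args) {
            try {
                // Throws FileNotFoundException if input.bin is missing, is a directory, etc.
                FileInputStream in = new FileInputStream("input.bin");

                // Creates output.bin if it does not exist; overwrites it if it does
                FileOutputStream out = new FileOutputStream("output.bin");

                in.close();
                out.close();
            } catch (FileNotFoundException e) {
                System.out.println("Could not open file: " + e.getMessage());
            } catch (IOException e) {
                System.out.println("Problem while closing: " + e.getMessage());
            }
        }
    }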
Chained streams are streams that use the output of another stream as
their input in a pipe (example: BufferedInputStream).
A chained stream is like a dependent connection that uses an underlying connection stream or
another chained stream to receive output or feed input, in order to complete the end-to-end
connection.
To provide buffering, a buffered stream is chained (connected) to another
stream.
Read operations:
The read method of BufferedInputStream:
o It reads 'length' number of bytes from the buffer into the input array,
starting at index position 'offset'.
o It returns the number of bytes read, or -1 if the end of the stream is detected.
o Let us look at how the method is implemented:
If the buffer has the requested amount of data, the data is copied
from the buffer into the input byte array.
If the buffer does not have all the data, then:
The partial data is first copied into the input array.
Next, the buffer is filled by fetching data from the underlying stream
(the read method of the chained stream is invoked);
the buffer itself is passed as the input.
Finally, the remaining data is copied from the buffer into the
input array.
…
Write operations:
The write method of BufferedOutputStream:
o It writes 'length' number of bytes from the array, starting at 'offset', into the buffer.
o If the buffer does not have enough space, the buffer is first flushed (its content is copied to the
underlying stream, e.g. a FileOutputStream) and then the array content is copied
into the buffer.
o To flush the buffer, the write method of the underlying chained stream is used,
and the buffer is simply passed as the argument.
o But if the input array size is greater than or equal to the buffer size, then the buffer is
first flushed and the input array is written directly to the underlying stream.
Decorator.close():
When the close method of a decorator is invoked, it internally invokes the
close method of the underlying decorated stream.
If the decorator is a BufferedOutputStream, then invoking close first
flushes all the buffer contents and then invokes close on the underlying stream. For
flushing, the flush method is used internally, but you don't have to invoke this method
explicitly, as both write and close flush the buffer contents
automatically.
There is nothing like really closing a decorator stream, as it simply provides some
functionality like buffering; it does not deal with a physical device as
in the case of a file stream. So the decorator just invokes the close method of the underlying
stream.
o The input parameter is an integer, which has 32 bits. The method writes the
lower 2 bytes, that is, the 16 bits that appear on the right; the rest of the
bits are discarded.
Method used to write groups of characters:
o It writes 'length' number of characters from the input character array,
starting at index position 'offset'.
FileWriter
o It is used to write characters to files.
o It is built on top of FileOutputStream (it uses a FileOutputStream internally).
o You can instantiate it by passing a file name to the constructor, like this.
These classes use the default encoding (the JVM has a default encoding, which is the default
character encoding of the underlying operating system: CP1252 on Windows
and UTF-8 on Linux).
o To change the default encoding for the JVM, you can use a command-line option.
o To find out what encoding your JVM uses, you can query it from code.
o It is not recommended to use these classes or methods, because you have no
control over these configurations. (A sketch of both follows this list.)
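A hedged sketch of setting and inspecting the default encoding (the flag and APIs are standard; the values shown are placeholders):

    // Launching with an explicit default encoding (command line):
    //   java -Dfile.encoding=UTF-8 MyApp

    import java.nio.charset.Charset;

    public class DefaultEncodingDemo {
        public static void main(String[] args) {
            // Two ways to inspect the JVM's default character encoding
            System.out.println(Charset.defaultCharset());
            System.out.println(System.getProperty("file.encoding"));
        }
    }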
Preferred approach: use these 2 classes, InputStreamReader and
OutputStreamWriter.
They enable you to set the character encoding.
These are general-purpose classes for doing the translation between byte and
character streams.
The constructor input parameters are the abstract InputStream and OutputStream; they are
not specific to file streams, so any byte stream objects can be passed.
o InputStreamReader translates from bytes to characters.
o OutputStreamWriter translates from characters to bytes.
FileReader and FileWriter actually extend these classes, as these classes are
more generic.
For top efficiency you would wrap them with BufferedReader or
BufferedWriter, which provide an additional buffering capability.
The benefit of using these 2 classes is that they are general purpose and you can set the character encoding
of your choice.
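A sketch of the preferred approach with an explicit charset (the file names are placeholders):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;
    import java.nio.charset.StandardCharsets;

    public class CharsetStreamDemo {
        public static void main(String[] args) throws IOException {
            try (InputStreamReader reader =
                         new InputStreamReader(new FileInputStream("in.txt"), StandardCharsets.UTF_8);
                 OutputStreamWriter writer =
                         new OutputStreamWriter(new FileOutputStream("out.txt"), StandardCharsets.UTF_8)) {
                int c;
                while ((c = reader.read()) != -1) {   // bytes are decoded into characters
                    writer.write(c);                  // characters are encoded back into bytes
                }
            }
        }
    }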
Buffering:
BufferedReader:
o Used to read character data from a file using a chained stream.
o Its constructor takes an input character stream: any class that extends
Reader.
o Methods:
Method for reading one character (as in the Reader class): it reads a
single character.
o Reading text from a file: an example of using BufferedReader for reading a
text file named 'go.txt'.
o Reading text from the console: an example of reading text from the console.
BufferedWriter:
o Used to write character data to a file using a chained stream.
o Its constructor takes an output character stream: any class that extends
Writer. (A sketch of both follows this list.)
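A hedged sketch combining BufferedReader and BufferedWriter; 'go.txt' follows the notes, the output file name and the copy logic are assumptions:

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;

    public class BufferedTextDemo {
        public static void main(String[] args) throws IOException {
            try (BufferedReader reader = new BufferedReader(new FileReader("go.txt"));
                 BufferedWriter writer = new BufferedWriter(new FileWriter("copy.txt"))) {
                String line;
                while ((line = reader.readLine()) != null) {   // read one line of text at a time
                    writer.write(line);
                    writer.newLine();                          // platform-specific line separator
                }
            }
        }
    }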
Constructing a File object does not throw an exception even if the pathname is
invalid.
Recall that the FileInputStream and FileOutputStream classes have constructors
which take a File object as input. However, the constructor that takes a file name as input is
more commonly used.
Deserialization process:
A. First, the serialized object is read.
B. The JVM finds the serialized object's class name and tries to load the corresponding class
object.
C. The JVM compares the version of the loaded class with the version of the serialized
object. They must match; otherwise the class has evolved after the object was
serialized and they are no longer compatible.
Deserialization fails if:
For some reason the class object cannot be loaded, because the JVM could
not find the class file or for some other reason.
There is a version mismatch, and the JVM throws an exception.
If there is no version mismatch, the JVM creates space in the heap for the
serialized object and recreates the object with the same state. The object's
constructor is not run.
If there is a non-serializable ancestor in the inheritance tree of the serialized
object, then:
That ancestor's constructor will be run, along with the constructors of all
its direct or indirect superclasses.
Instance variables get their serialized state.
Transient variables get default values.
Example of deserialization:
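A hedged deserialization sketch with a hypothetical serializable User class (the class, its fields, and the file name are illustrative assumptions):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.Serializable;

    class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        transient String sessionToken;   // transient: restored to its default value (null)
    }

    public class DeserializationDemo {
        public static void main(String[] args) {
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("user.ser"))) {
                User user = (User) in.readObject();   // the User constructor is not run
                System.out.println(user.name + " / " + user.sessionToken);
            } catch (ClassNotFoundException e) {
                System.out.println("Class of the serialized object could not be loaded");
            } catch (IOException e) {
                System.out.println("Read failed (possibly a version mismatch): " + e.getMessage());
            }
        }
    }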
If you want instances of your class to be usable in a for-each statement, you need to:
Implement the Iterable interface.
Provide an implementation for the iterator method.
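A minimal sketch of implementing Iterable (the NameList class and its contents are hypothetical):

    import java.util.Arrays;
    import java.util.Iterator;
    import java.util.List;

    // Hypothetical class whose instances can be used in a for-each loop
    class NameList implements Iterable<String> {
        private final List<String> names = Arrays.asList("Ana", "Omar", "Lina");

        @Override
        public Iterator<String> iterator() {      // the only method Iterable requires
            return names.iterator();
        }
    }

    public class IterableDemo {
        public static void main(String[] args) {
            for (String name : new NameList()) {  // works because NameList is Iterable
                System.out.println(name);
            }
        }
    }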
The Collection interface includes several fundamental methods common to all collections, and
they are classified into 3 categories:
Category 1 - Basic operations:
Category 2 - Bulk operations:
In addition to the methods inherited from the Collection interface, it also adds
several new methods.
The List interface contains several new methods. These methods are classified into 4 categories:
Category 1 – positional access:
Category 2 – search:
Category 3 – iteration:
You would want to use a LinkedList if you are iterating and, during the iteration, you
have frequent add/remove operations. In a linked list,
adding or removing an element just relinks the neighbouring nodes.
It is a better data structure for removeAll or retainAll operations.
These LinkedList methods have linear time complexity
O(n):
get(i): about n/2 operations
add(i, e): about n/2 operations
remove(i): about n/2 operations
indexOf(Object)
lastIndexOf(Object)
Java supports only doubly linked lists:
LinkedList is basically a doubly linked list implementation of the List and
Deque interfaces.
It models LIFO and FIFO operations in constant time O(1).
It also allows duplicates and null values.
Constructor with an integer parameter: you can specify an initial capacity by passing an
integer parameter which represents the expected size of the deque.
Constructor with a collection parameter: constructs the array deque with the
elements from the collection parameter added.
It is recommended to use ArrayDeque rather than LinkedList, for several reasons:
From the Javadoc: ArrayDeque is faster than LinkedList when the linked list is used as a
queue.
ArrayDeque is around 3 times faster than LinkedList for large queues. The main
reason is that with a linked list we have to create an additional node object for
every newly added element (ArrayDeque is more memory-efficient).
Why can’t use array list like a FIFO? We simply need to use add method for adding
element to the tail and remove of zero for removing element from head
Performance:
Invoking remove of zero on an array list would shift all the subsequent
elements so it has linear time complexity
Gets bad with large number of elements because we have more and more
elements shifts
For example for 1000 elements, linked List is 20 faster than array list for queue like
access
With array deque we don’t have elements shifts as its implementation is
base on something called circular array which does not evolve elements
shifts. It could constant time for adding and removing instance
Intention:
The intention when using ArrayDeque is to use it as a FIFO or LIFO, which
involves only head or tail manipulation.
Methods like pop, peek, and push very clearly reflect that.
But with an ArrayList, we do not constrain ourselves to only the head and tail.
Method complexity:
Most methods run in amortized constant time, that is, mostly constant time.
Some methods for removing or searching specific objects have linear
complexity, as we need to scan the queue, for example:
remove(Object) (it does not remove the head element)
removeFirstOccurrence
removeLastOccurrence
contains
Each key-value pair is also referred to as a mapping, and a hash table is also referred to as a
dictionary.
Key operations: all these operations happen in constant time, so a hash table is really fast:
Insert a key-value mapping into the hash table.
Search for a mapping using a given key.
Remove a particular mapping using a key.
Hash table characteristics:
It cannot contain duplicate keys.
It can contain duplicate values.
Each key maps to at most one value.
Certain implementations allow null values and also allow one null key, but
some other implementations do not allow nulls for either keys or values.
Illustration of how a hash table is implemented:
It is an associative array.
We have an array, and each element of this array references a linked list which
stores the actual key-value mappings.
The array does not store the key-value mappings; the linked lists store them.
Each linked list can hold multiple mappings too.
For each key-value mapping, a function called the hash function is applied to
the key, and the result is the index in the array (which is why we can have many
mappings in the same linked list).
Hash function:
It is basically a function of the key and the array size.
A simple hash function is key mod array size.
It should quickly locate the target bucket, so it must be highly efficient.
It should be able to distribute the elements as uniformly as possible across the buckets.
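A tiny sketch of the "key mod array size" idea (the capacity and keys are placeholders; Math.abs guards against negative hash codes for most inputs):

    public class HashFunctionDemo {
        static int bucketFor(Object key, int capacity) {
            // Simple hash function: spread the key's hash code over the buckets
            return Math.abs(key.hashCode()) % capacity;
        }

        public static void main(String[] args) {
            int capacity = 16;   // default number of buckets in a HashMap
            System.out.println(bucketFor("apple", capacity));
            System.out.println(bucketFor("banana", capacity));
        }
    }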
Insertion operation (separate chaining):
We locate the bucket for the given key by using the hash function and we check if the
bucket is empty:
If the bucket is empty, we create a new linked list at that index and add
the mapping to that linked list.
If the bucket is not empty (which means there is a collision, i.e. several
mappings fall in the same bucket), then we check if the linked list has another mapping
with the same key:
o If there is no such mapping, the mapping is added at the front
of the list.
o If there is a mapping with the same key, the value in the old mapping
is overwritten with the new value.
So here we have collisions, and we are using a collision resolution strategy called
separate chaining.
Other factors besides a good hash function that contribute to performance:
Capacity: the number of buckets in the hash table. The default is 16.
Load factor: how full the hash table can get before its capacity is automatically
increased. The default load factor is 0.75.
Resizing a hash table involves expensive rehashing, because when we resize the hash table
we have to redistribute the entries of all the internal linked lists into new buckets.
If you have a very large number of mappings, you should create the hash
table with a large initial capacity.
If you use a very high load factor, you may rehash less frequently, but you
get a lot of collisions too, and that can slow things down.
If you have a very large number of mappings, you should run some
experiments to choose the right factors.
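A sketch of tuning capacity and load factor via the HashMap constructor (the values are placeholder assumptions):

    import java.util.HashMap;
    import java.util.Map;

    public class HashMapTuningDemo {
        public static void main(String[] args) {
            // A large initial capacity avoids repeated (expensive) rehashing;
            // 0.75f is the default load factor, shown here explicitly.
            Map<String, Integer> wordCounts = new HashMap<>(1 << 16, 0.75f);
            wordCounts.put("java", 1);
            System.out.println(wordCounts);
        }
    }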
Applications of hash tables:
Database indexing.
NoSQL databases: because they store key-value mappings.
switch statement: internally it uses hashing to quickly locate the matching case
block.
Key: a HashSet stores only individual objects; those objects are stored as keys.
Value: an empty object is stored as the value (the empty object is an instance of
the Object class).
You can look at the add method of the HashSet class if you are interested.
It also allows one null element.
Typical use case: it is useful when you need:
Rapid lookup, rapid insertion, and rapid deletion, O(1).
Insertion order is not important.
It is better than an ArrayList if removeAll or retainAll operations are to be used
frequently:
In an ArrayList, each removal has to locate the element and shift the remaining
elements, so the complexity is linear, O(n).
In a HashSet, each removal performed by removeAll or retainAll is a constant-time
operation, O(1).
Item 52: Refer to objects by their interfaces
It is recommended, when you create an object, to refer to it by its
interface, as in this example:
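A one-line sketch of the recommendation (the variable name and contents are placeholders):

    import java.util.LinkedHashSet;
    import java.util.Set;

    public class InterfaceTypeDemo {
        public static void main(String[] args) {
            // Good: the variable has the interface type, so the implementation can be swapped later
            Set<String> tags = new LinkedHashSet<>();
            tags.add("java");
            System.out.println(tags);
        }
    }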
3.6.3. LinkedHashSet
It’s another set implementation
It uses a hash table as well as a linked list in its implementation of the set interface
Linked hash set has many characteristics:
Preserves insertion order (using double linked list):it has a doubled linked list
running through its elements which helps in preserving the order of inserted
elements
It extend Hash set so its almost as fast as hash set
Like hash set, rapid lookup insertion and deletion has constant time O(1)
It permit only one null element
Internally it used LinkedHashMap
All set implementation uses internally their corresponding map implementation
Use LinkedHashSet if you want:
Fast lookup insertion and deletion
Insertion order is important ( is preserved )
Better then array List if remove All and retain All operations are to be used frequently
When you want to iteration the elements, LinkedHashSet is faster than Hash Set.
That’s because:
Iteration with LinkedHashSet is dependent on the size of the set due to use
of double linked list
Iteration with Hash Set is dependent on the capacity (its number of
buckets in the Hash Table that is the array size)
It returns a sorted set that includes the elements that are less than the
input argument 'toElement'.
o Tail set method:
It returns a sorted set that includes the elements that are greater than or equal
to the input argument 'fromElement'.
Endpoints:
o Lower method:
It returns the greatest element that is less than the target element E
which is the input to the method:
greatest element < E
It returns null if no such element is found.
o Floor method:
It is similar to the lower method, but it returns the greatest element that
is less than or equal to the target element E:
greatest element <= E
o Ceiling method:
Iterators:
o Iterator methods:
iterator method:
It returns an iterator for regular (ascending) iteration.
descendingIterator method:
It returns an iterator for iterating in reverse order.
o Descending set method:
K: the class which represents the type of all keys in the map.
V: the class which represents the type of all values in the map.
The Map interface has several groups of operations (several types of methods):
Basic operations:
put method:
It simply adds the key-value pair to the map.
If there is already a mapping with the same key, then the value in that
mapping is overwritten with the new value, and the old value
is returned by this method.
If there is no mapping with that key, the new mapping is
inserted and null is returned.
But if the map implementation allows null values and there is a
matching key with a null value, then that null value is
returned.
get method:
It returns the value corresponding to the given key.
If there is no mapping with that key, null is returned.
remove method:
It removes the mapping for the input key and returns the
corresponding value.
It returns null if no mapping with a matching key was
found.
size method:
It returns the size of the map.
isEmpty method:
It returns true if the size of the map is zero, which means it is
empty.
Bulk operations:
putAll method:
It adds all the mappings from the input map into the current
map.
It is similar to the addAll method in the Collection interface.
clear method:
It simply removes all the mappings from the map.
values method:
It returns a collection view of all the values in the map.
All the properties we discussed for the keySet method also hold for this
method.
If you want a fixed size for the LRU cache, you need to extend this class (LinkedHashMap) and
then you can specify the size of the LRU cache.
To remove the least recently used item there is a method called
removeEldestEntry.
When you insert an item using the put method or the putAll method, that
invokes this particular method.
In this class the method always returns false. If it returns
false, that means the eldest entry should not be removed when
put and putAll invoke removeEldestEntry.
By default, LinkedHashMap does not remove the eldest
entry, because we should remove the eldest entry only if the cache is full,
and a LinkedHashMap has unlimited size.
To remove the eldest entry, you should extend this class,
override this method, and return true if the size is full.
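A hedged LRU-cache sketch using LinkedHashMap and removeEldestEntry (the cache size, key type, and class name are placeholder assumptions):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Fixed-size LRU cache: access order + removeEldestEntry
    class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        LruCache(int maxEntries) {
            super(16, 0.75f, true);          // accessOrder = true: order goes from least to most recently used
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries;      // remove the eldest entry only when the cache is full
        }
    }

    public class LruCacheDemo {
        public static void main(String[] args) {
            LruCache<String, Integer> cache = new LruCache<>(2);
            cache.put("a", 1);
            cache.put("b", 2);
            cache.get("a");                  // touch "a" so "b" becomes the eldest
            cache.put("c", 3);               // evicts "b"
            System.out.println(cache.keySet());   // [a, c]
        }
    }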
3.7.4. Sorted Map
A sorted map allows sorting of mappings based on keys.
The sorting of a sorted map can be due to either:
Natural ordering: the keys implement the Comparable interface.
A Comparator, which is provided during the creation of the map.
SortedMap extends the Map interface, so it inherits its methods.
SortedMap methods fall into several groups (they are very similar to the methods of
SortedSet):
Range view:
subMap: returns a map containing all mappings between "fromKey" and
"toKey".
headMap: returns a map containing all mappings between the first key and
"toKey".
tailMap: returns a map containing all mappings between "fromKey" and
the last key.
Endpoints:
keySet method: returns the set of all keys, ordered according to the
sorting criteria.
values method: returns a collection of the values in the sorted map; the
order of the values depends on the corresponding keys.
entrySet method: returns a set of all the mappings in the sorted map, and
they are also sorted by keys.
If you want the returned list to be modifiable, you can do something like
this (see the sketch after this list):
You can pass a variable number of parameters (a comma-separated list) to the asList method,
and it returns a list of these input parameters.
You can create a fixed-size list using the asList method of the Arrays class
by passing an uninitialized array to asList, as in this example.
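A sketch of the three Arrays.asList variations described above (the element values are placeholders):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class AsListDemo {
        public static void main(String[] args) {
            // 1. Modifiable list: wrap the fixed-size asList result in a real ArrayList
            List<String> modifiable = new ArrayList<>(Arrays.asList("a", "b", "c"));
            modifiable.add("d");                       // fine

            // 2. Varargs (comma-separated) parameters, fixed-size result
            List<Integer> fixed = Arrays.asList(1, 2, 3);
            fixed.set(0, 10);                          // allowed: replace an element
            // fixed.add(4);                           // would throw UnsupportedOperationException

            // 3. Fixed-size list backed by an uninitialized array
            List<String> tenSlots = Arrays.asList(new String[10]);
            System.out.println(modifiable + " " + fixed + " " + tenSlots.size());
        }
    }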
toString method: it takes an array and returns a string representation of all
the elements, that is, a string representation of the contents of the specified array.
The string representation consists of a list of the array's elements, enclosed in
square brackets ("[]").
it’s another version of sort method which takes array and comparator
implementation
binary search method:
it takes like parameters an array and the search item
it uses binary search algorithm it’s used for search the second parameter in
the first input array
the input array must be sorted otherwise the behavior is undefined
if the element in found, it would return the index in the array
otherwise it look at the index position where this element can be inserted
by resting the array sorted and it returns –(insertion point) – 1
copyOf method:
It takes an array and the length of the new array, and it returns a copy of the input
array with the specified length.
If the input length is greater than the length of the input array, it copies all the
elements of the array and fills the remaining slots with the default value (e.g. zero).
fill method:
It takes an array and an input value and fills each
element of the input array with that second parameter.
It is a good way to initialize an array with a single value.
equals method:
It takes 2 arrays and compares them.
It returns true if the 2 input arrays are equal, which means the arrays have the
same type, the same size, and exactly the same contents.
deepEquals method:
It is like the equals method, but it does a deep comparison, which means that if we have
nested arrays it is still able to compare them.
It handles not only single-dimensional arrays but also multi-dimensional arrays:
Its parameters are of type Object[], and a nested array element is itself an
object, so deepEquals compares such elements as arrays rather than by
reference.
It takes as parameters 2 arrays of the Object class.
It returns true if the arrays are deeply equal to one another.
"Deeply" is what makes it appropriate for nested arrays.
Parallelized operations (from Java 8): if your system has multiple cores, these methods can
use those cores to run in parallel and more efficiently.
parallelSort method:
It is just like the sort method: it takes an input array and sorts the elements.
This method is beneficial only for large arrays (more than 8192 elements).
If the input array has more than 1 << 13 = 8192 elements, it uses the multi-core
system and is faster.
If the input array has fewer than 8192 elements, it uses the sequential sort
method.
sort method:
It is just like the sort method in the Arrays class.
It sorts the elements of an input list.
In this case the parameter is a List.
It uses natural ordering to sort the elements of the list, which means
the elements of the list must implement the Comparable interface.
swap method:
frequency method: it looks at the input parameter and tells how many times it is present
in the input list.
shuffle method.
max/min methods: return the max/min value from the input list using natural ordering.
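A short sketch of these Collections utility methods on a placeholder list:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class CollectionsUtilDemo {
        public static void main(String[] args) {
            List<Integer> numbers = new ArrayList<>(Arrays.asList(3, 1, 3, 2));

            Collections.sort(numbers);                              // natural ordering: [1, 2, 3, 3]
            Collections.swap(numbers, 0, 3);                        // swap first and last elements
            System.out.println(Collections.frequency(numbers, 3));  // how many times 3 occurs: 2
            Collections.shuffle(numbers);                           // random order
            System.out.println(Collections.max(numbers) + " " + Collections.min(numbers)); // 3 1
        }
    }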