Interviews

Sunday, 25 March 2018

KISS, YAGNI & DRY, 3 Principles to Simplify Your Life as a Developer

As software developers, we face all types of scenarios, from the easiest projects to the most complex solutions our clients ask for. So why fall into the trap of designing them in a more complex way than they really need to be?
In the years I have spent in this fantastic world of programming, I have seen some really complex code that makes no sense to me; in other words, code that only the programmer who wrote it (and maybe God!) could explain. And when you ask that programmer what the code does, the answer is often "only God knows". Don't misunderstand me: I know there are problems with no easy solution. But why make them more complex than they are when a simpler approach would solve them?
That is why I want to introduce you to (or remind you of, if you already know them) some basic but very powerful principles that will make your life, and the lives of most of your collaborators, easier.
KISS
"Keep It Simple, Stupid!" – I would add some extra exclamation marks (!!!!) to fix this in your mind. The simpler your code is, the simpler it will be to maintain in the future, and of course, anyone else who reads it will thank you for it.
The KISS principle was coined by Kelly Johnson, and it states that most systems work best if they are kept simple rather than made complicated; therefore simplicity should be a key goal in design, and unnecessary complexity should be avoided.
My advice is to avoid using fancy features of the programming language you're working with just because the language lets you. This is not to say that you should never use those features, but use them only when there are perceptible benefits to the problem you're solving. With this premise, I introduce you to the next principle.
YAGNI
"You Aren't Gonna Need It" – Sometimes, as developers, we think too far into the project's future and code extra features "just in case we need them" or because "we will eventually need them". Just one word... wrong! I'll put it this way: you didn't need it, you don't need it, and in most cases... "You Aren't Gonna Need It".
YAGNI is the principle behind the extreme programming (XP) practice of "Do the Simplest Thing That Could Possibly Work". Even though this principle comes from XP, it is applicable to every methodology and development process.
When you feel an unexplained urge to code extra features that are not needed right now but that you think will be useful in the future, just calm down and look at all the pending work you have at this moment. You can't afford to waste time on features that you may later have to correct or change because they don't fit what is actually needed, or that, in the worst case, will never be used.
DRY
"Don't Repeat Yourself" – How many times have you seen nearly identical code in different parts of a system? This principle, formulated by Andrew Hunt and David Thomas in their book The Pragmatic Programmer, states that every piece of knowledge must have a single, unambiguous, authoritative representation within a system. In other words, each piece of behaviour in the system should live in a single piece of code.
On the other hand, solutions that violate the DRY principle are called WET, which stands for either "Write Everything Twice" or "We Enjoy Typing".
This principle is very useful, especially in big applications that are constantly maintained, changed and extended by many programmers. But don't abuse it by DRYing everything you write; remember the first two principles, KISS and YAGNI, come first.
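As a minimal sketch of the idea (the discount rule, class and method names here are hypothetical), the point is to keep each piece of knowledge in exactly one place instead of repeating it inline everywhere:

```java
import java.util.List;

public class DryExample {
    // Before the refactor, a hypothetical "10% off" rule was written inline
    // in both orderTotal and previewPrice. DRY: the rule lives in one method.
    static double discounted(double price) {
        return price * 0.9; // single, authoritative representation of the rule
    }

    static double orderTotal(List<Double> prices) {
        return prices.stream().mapToDouble(DryExample::discounted).sum();
    }

    static double previewPrice(double price) {
        return discounted(price);
    }

    public static void main(String[] args) {
        System.out.println(orderTotal(List.of(100.0, 50.0))); // 135.0
        System.out.println(previewPrice(100.0));              // 90.0
    }
}
```

If the rule ever changes to 15%, there is exactly one line to edit, which is the whole point of the principle.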
There are many other principles and good coding practices in software development, but I believe these three are the basics. Putting them into practice will save you and your team a lot of headaches maintaining code that nobody knows how it works, nobody fully understands, and, most importantly, nobody wants to work with!

Saturday, 26 August 2017

Externalization in Java

Before going into what externalization is, you need some knowledge of serialization, because externalization is nothing but an alternative form of serialization: the Externalizable interface extends the Serializable interface. Check the Serialization article for details on serialization. As an overview, serialization is the process of converting an object's state (including the objects it references) into a sequence of bytes, as well as the process of rebuilding a live object from those bytes at some future time. An object can be serialized by implementing either the Serializable interface or the Externalizable interface.
Well, if serialization via the Serializable interface serves your purpose, why should you go for externalization?
Good question! Serializing by implementing the Serializable interface has some issues. Let's look at them one by one.
  • Serialization is a recursive algorithm. Starting from a single object, every object reachable from it through instance variables is serialized as well, and that includes each object's superclasses all the way up to the "Object" class. Basically, everything the object can reach. This leads to a lot of overhead. Say, for example, you need only a car's type and license number; with default serialization, you cannot stop there. Everything reachable from the car object, its full description, its parts, and so on, will be serialized too. Obviously, this slows down performance.

  • Both serializing and deserializing require the serialization mechanism to discover information about the instance it is processing. The default mechanism uses reflection to discover all the field values. In addition, class-description information is written to the stream, including descriptions of all the serializable superclasses, the class itself, and the instance data of the specific object. That is a lot of data and metadata, and again a performance cost.

  • Serialization needs a serialVersionUID, a unique ID identifying the persisted data. If you don't explicitly declare a serialVersionUID, the runtime computes one by going through all the fields and methods of the class. So, depending on the size of the class, the serialization mechanism takes a corresponding amount of time to calculate the value. A third performance issue.

  • The three points above confirm that serialization has performance issues. Beyond performance: when an object implementing the Serializable interface is serialized or deserialized, no constructor of the object is called, so any initialization done in a constructor does not happen. There is a workaround, moving the initialization logic into a separate method called from both the constructor and readObject, so that it runs whether the object is created or deserialized, but it is definitely a messy approach.
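The serialVersionUID point above can be sketched as follows; the `Account` class and its fields are hypothetical, the only point being the explicit UID declaration:

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class Account implements Serializable {
    // With an explicit UID, the runtime does not have to compute a default
    // one reflectively from the class's fields and methods, and the stream
    // stays stable across recompiles of compatible class versions.
    private static final long serialVersionUID = 1L;

    private String owner;
    private double balance;

    public static void main(String[] args) {
        // ObjectStreamClass reports the UID the stream will actually use.
        System.out.println(ObjectStreamClass.lookup(Account.class).getSerialVersionUID()); // 1
    }
}
```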
The solution to all the above issues is externalization. Cool, here enters the actual topic.
So what is externalization?
Externalization is still serialization, but done by implementing the Externalizable interface to persist and restore the object. To externalize your object, you implement Externalizable, which extends Serializable. With externalization, only the identity of the class is written to the serialization stream, and it is the responsibility of the class itself to save and restore the contents of its instances, which means you have complete control over what gets serialized and what doesn't. With plain serialization, by contrast, the identities of the class, its superclasses and its instance variables, followed by the contents of all those items, are written to the stream. Note that to externalize an object, the class needs a public no-arg constructor.
Unlike Serializable, Externalizable is not a marker interface: it declares two methods, writeExternal and readExternal. The class implements these methods to take complete control over the format and contents of the stream for an object and its supertypes, and must explicitly coordinate with the supertype to save its state. These methods supersede customized implementations of the writeObject and readObject methods.
How does serialization happen? The JVM first checks for the Externalizable interface; if the object is Externalizable, it is serialized using its writeExternal method. If the object is not Externalizable but implements Serializable, it is saved through the default ObjectOutputStream mechanism. When an Externalizable object is reconstructed, an instance is first created using the public no-arg constructor, and then readExternal is called. Again, if the object is not Externalizable, Serializable objects are restored by reading them from an ObjectInputStream.
Let's see a simple example.
 import java.io.*;

 public class Car implements Externalizable {
   String name;
   int year;

   /* mandatory public no-arg constructor */
   public Car() { super(); }

   Car(String n, int y) {
      name = n;
      year = y;
   }

   /** Mandatory writeExternal method. */
   public void writeExternal(ObjectOutput out) throws IOException {
      out.writeObject(name);
      out.writeInt(year);
   }

   /** Mandatory readExternal method. */
   public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
      name = (String) in.readObject();
      year = in.readInt();
   }

   /** Prints out the fields; used for testing. */
   public String toString() {
     return ("Name: " + name + "\n" + "Year: " + year);
   }
 }
 import java.io.*;

 public class ExternExample {
   public static void main(String[] args) {
      // create a Car object
      Car car = new Car("Mitsubishi", 2009);
      Car newCar = null;

      // serialize the car
      try {
        FileOutputStream fo = new FileOutputStream("tmp");
        ObjectOutputStream so = new ObjectOutputStream(fo);
        so.writeObject(car);
        so.flush();
        so.close();
      } catch (Exception e) {
        System.out.println(e);
        System.exit(1);
      }

      // de-serialize the Car
      try {
        FileInputStream fi = new FileInputStream("tmp");
        ObjectInputStream si = new ObjectInputStream(fi);
        newCar = (Car) si.readObject();
        si.close();
      } catch (Exception e) {
        System.out.println(e);
        System.exit(1);
      }

      // print out the original and new car information
      System.out.println("The original car is ");
      System.out.println(car);
      System.out.println("The new car is ");
      System.out.println(newCar);
   }
 }

Wednesday, 18 February 2015

How HashMap works in Java

Dear reader,
This article covers the following topics:
1) How HashMap works in Java.
2) The difference between HashMap and Hashtable.
3) At the end, a few programs implementing the equals() and hashCode() methods
   and showing the impact of these methods on HashMap storage.

How HashMap works in Java is a very common question. Almost everybody who has worked in Java knows what
HashMap is, where to use it, and the difference between Hashtable and HashMap; so why does this interview
question become so special? Because of the breadth and depth this question offers. It has become a very
popular Java interview question in almost any senior or mid-senior level interview.

Questions start with a simple statement:
"Have you used HashMap before?" or "What is HashMap? Why do we use it?"
Almost everybody answers yes, and then the interviewee keeps talking about common facts about HashMap:
HashMap accepts null while Hashtable doesn't, HashMap is not synchronized, HashMap is fast, and so on, along
with basics such as it storing key-value pairs.
This shows that the person has used HashMap and is quite familiar with the functionality it offers, but the
interview takes a sharp turn from here, and the next set of follow-up questions digs into the fundamentals
behind HashMap. The interviewer comes back with questions like:

"Do you know how HashMap works in Java?" or "How does the get() method of HashMap work in Java?"
And then you get answers like "I don't bother, it's a standard Java API; you'd better look at the code
yourself" or "I can find it on Google at any time".
But some interviewees will definitely answer: "HashMap works on the principle of hashing; we have the put()
and get() methods for storing and retrieving data. When we pass a key-value pair to put(), the HashMap
implementation calls hashCode() on the key object and, by applying its own hash function to that hashcode,
identifies a bucket location for storing the value object. The important part is that HashMap stores both the
key and the value in the bucket, which is essential to understanding the retrieval logic." People who fail to
recognize this and say it stores only the value in the bucket will fail to explain how any object is
retrieved from a HashMap. This answer is perfectly acceptable and shows that the interviewee has a fair idea
of how hashing and HashMap work in Java.
But this is just the start of the story. The depth increases when the interviewer puts you in scenarios Java
developers face on a day-to-day basis. So the next question is likely to be about collision detection and
collision resolution in a Java HashMap, e.g.:

"What will happen if two different objects have the same hashCode?"
Here the confusion starts. Sometimes candidates will say that since the hashCodes are equal, the objects are
equal, so HashMap will throw an exception or refuse to store the second one. Then you may want to remind them
of the equals() and hashCode() contract: two unequal objects in Java can very well have equal hashCodes. Some
will give up at this point, and some will move ahead and say: "Since the hashCode is the same, the bucket
location will be the same and a collision occurs in the HashMap. Since HashMap uses a linked list to store
entries in a bucket, the value object will be stored in the next node of the linked list."
Great, this answer makes sense. Although other collision-resolution methods exist, this is the simplest, and
it is what HashMap does. See the program at the end of this article.

But the story does not end here. The final questions the interviewer asks are like:
"How will you retrieve the value if two different keys have the same hashCode?"
Hmmmm...
The interviewee will say: we call get(), and HashMap uses the key's hashCode to find the bucket location and
retrieve the object. Then you need to remind them that two objects are stored in the same bucket. They will
talk about traversing the linked list until the value object is found, and then you ask how the correct value
object is identified, given that there is no value object to compare against. Until they know that HashMap
stores both the key and the value in each linked-list node, they won't be able to resolve this and will try
and fail.

Those who remember this key detail will say that after finding the bucket location, we call the key's
equals() method to identify the correct node in the linked list and return the associated value object.
Perfect, that is the correct answer.

In many cases the interviewee fails at this stage because they get confused between hashCode() and equals(),
and between the key and value objects in the HashMap, which is understandable: they have been dealing with
hashCode() in all the previous questions, and equals() only comes into the picture when retrieving the value
object from the HashMap.
Good developers point out here that immutable, final objects with proper equals() and hashCode()
implementations make perfect HashMap keys and improve the performance of a Java HashMap by reducing
collisions. Immutability also allows caching the hashCode of keys, which makes the overall retrieval process
very fast; that is why String and the wrapper classes such as Integer make very good HashMap keys.

Now, if you clear all these HashMap questions, you may be surprised by this very interesting one:
"What happens if the size of the HashMap exceeds a given threshold defined by the load factor?"
Unless you know how HashMap works internally, you won't be able to answer it.
If the size of the map exceeds the threshold defined by the load factor, e.g. with a load factor of 0.75 the
map is resized once it is 75% full, Java's HashMap does this by creating a new bucket array twice the size of
the previous one and then putting every old element into the new array. This process is called rehashing,
because the hash function is applied again to find each entry's new bucket location.
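A rough sketch of the numbers involved (the class name is mine; 16 and 0.75 are HashMap's documented default capacity and load factor):

```java
import java.util.HashMap;
import java.util.Map;

public class ResizeDemo {
    public static void main(String[] args) {
        // Initial capacity 16, load factor 0.75,
        // so the resize threshold is 16 * 0.75 = 12 entries.
        Map<Integer, String> map = new HashMap<>(16, 0.75f);
        for (int i = 0; i < 13; i++) {
            // The 13th put crosses the threshold: the bucket array
            // doubles to 32 and every existing entry is rehashed.
            map.put(i, "value" + i);
        }
        System.out.println(map.size()); // 13
    }
}
```

The resizing itself is invisible to callers; it only shows up as the performance cost discussed above.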

If you manage to answer this question, you will be greeted by "Do you see any problem with resizing a HashMap
in Java?". You might not pick up the context, so the interviewer will hint at multiple threads accessing the
HashMap and a potential race condition.

The answer is yes, there is a potential race condition while resizing a HashMap in Java: if two threads find
at the same time that the HashMap needs resizing, they will both try to resize it. During resizing, the
elements stored in a bucket's linked list get reversed in order when migrated to the new bucket array,
because HashMap appends new elements at the head rather than the tail to avoid traversing the list. If the
race condition happens, you can end up with an infinite loop. At this point you can reasonably ask the
interviewer what on earth makes anyone use HashMap in a multi-threaded environment :).
Never use HashMap in a multi-threaded environment; use ConcurrentHashMap instead.

I like this question because of its depth and the number of concepts it touches indirectly. Looking at the
questions asked during the interview, this HashMap discussion has verified:
    The concept of hashing
    Collision resolution in HashMap
    The use of the equals() and hashCode() methods and their importance
    The benefit of immutable objects
    Race conditions on HashMap in Java
    Resizing of a Java HashMap

To summarize, here are the answers that make sense for the above questions:
How HashMap works in Java:
HashMap works on the principle of hashing; we have the put() and get() methods for storing and retrieving objects.
When we pass both a key and a value to put(), HashMap uses the key object's hashCode() method to calculate the
hashcode, and by applying hashing to that hashcode it identifies the bucket location for storing the value object.
While retrieving, it uses the key object's equals() method to find the correct key-value pair and returns the value
object associated with that key. HashMap uses a linked list in case of a collision, and the colliding entry is
stored in the next node of the linked list. Each node of that linked list stores the full key+value tuple.

What will happen if two different HashMap keys have the same hashcode?
They will be stored in the same bucket, in successive nodes of the linked list, and the keys' equals() method
will be used to identify the correct key-value pair in the HashMap.
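A small sketch of this (the `Key` class is contrived: its hashCode is deliberately constant so that every instance lands in the same bucket):

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical key type with a forced hash collision.
    static final class Key {
        final String id;
        Key(String id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).id.equals(id);
        }
        @Override public int hashCode() { return 42; } // every Key collides
    }

    public static void main(String[] args) {
        Map<Key, String> map = new HashMap<>();
        map.put(new Key("a"), "first");
        map.put(new Key("b"), "second"); // same bucket, new node in the chain
        // get() walks the bucket's chain and uses equals() to pick the node:
        System.out.println(map.get(new Key("a"))); // first
        System.out.println(map.get(new Key("b"))); // second
    }
}
```

Both entries survive the collision and both are retrievable, because equals() disambiguates within the bucket.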

In terms of usage, HashMap is very versatile; for example, it is commonly used as a cache in electronic
trading applications. The finance domain uses Java heavily and needs a lot of caching for performance
reasons, so HashMap comes in very handy there.

For new insertions: if the key is the same, i.e. equal by the equals() method, then the hashcode will be the
same and the value will be replaced. But if the keys are not equal while their hashcodes are the same (which
is possible in Java), then the bucket location will be the same, a collision occurs, and the second entry is
stored in the next node of the bucket's linked list.

So key.equals() is used on both put() and get() calls when an object already exists at the bucket location,
and that is why each node stores both the key and the value. Since java.lang.Object provides hashCode() and
equals(), any object can be used as a key in a hash table.

So the known risks of using HashMap in a multi-threaded environment are internal index corruption and
infinite looping, either of which can bring your JVM to its knees.

Which JDK Map data structure is more suitable for concurrent read and write operations in a Java EE
environment? The answer is ConcurrentHashMap, introduced in JDK 1.5, which provides thread-safe operations
with a non-blocking get(); this is key to good performance. It is the typical data structure used these days
in modern Java EE container implementations.

JDK 1.5 introduced several good concurrent collections that are highly efficient for high-volume,
low-latency systems.

The synchronized collections classes, Hashtable and Vector, and the synchronized wrapper classes, 
Collections.synchronizedMap and Collections.synchronizedList, provide a basic conditionally thread-safe implementation 
of Map and List.
However, several factors make them unsuitable for use in highly concurrent applications -- their single collection-wide 
lock is an impediment to scalability and it often becomes necessary to lock a collection for a considerable time during 
iteration to prevent ConcurrentModificationExceptions.

The ConcurrentHashMap and CopyOnWriteArrayList implementations provide much higher concurrency while preserving thread 
safety, with some minor compromises in their promises to callers. ConcurrentHashMap and CopyOnWriteArrayList are not 
necessarily useful everywhere you might use HashMap or ArrayList, but are designed to optimize specific common situations. 
Many concurrent applications will benefit from their use.

So what is the difference between Hashtable and ConcurrentHashMap? Both can be used in a multi-threaded
environment, but once a Hashtable grows considerably large, performance degrades, because it has to be locked
for a long duration during iteration.

ConcurrentHashMap introduced the concept of segmentation: however large it becomes, only a certain portion of
it gets locked to provide thread safety, so many readers can still access the map without waiting for an
iteration to complete.

In summary, ConcurrentHashMap locks only a portion of the map, whereas Hashtable locks the full map while
iterating.
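A minimal sketch of ConcurrentHashMap coping with concurrent writers (the counter scenario and names are mine; `Map.merge` on a ConcurrentHashMap performs the per-key update atomically):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counts.merge("hits", 1, Integer::sum); // atomic per-key update
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // No updates are lost, unlike a plain HashMap under the same load.
        System.out.println(counts.get("hits")); // 2000
    }
}
```

With a plain HashMap the same program could lose updates or corrupt the table; with Hashtable it would be correct but every operation would contend on one lock.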

A HashMap can be synchronized with:
 Map m = Collections.synchronizedMap(hashMap);
 
Difference between HashMap and Hashtable?
1. The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls
   (HashMap allows nulls as keys and values, whereas Hashtable doesn't allow nulls at all).
2. HashMap does not guarantee that the order of the map will remain constant over time.
3. HashMap is unsynchronized whereas Hashtable is synchronized.
4. The Iterator of the HashMap is fail-fast while the Enumeration of the Hashtable is not: the Iterator
   throws ConcurrentModificationException if any other thread modifies the map structurally by adding or
   removing any element, except through the Iterator's own remove() method. This is not guaranteed
   behaviour, though; the JVM does it on a best-effort basis.
5. Hashtable extends the old Dictionary class, while HashMap implements the Map interface.
6. Hashtable has no high-concurrency counterpart the way HashMap has ConcurrentHashMap.
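Point 1 about nulls can be verified directly (the class name is mine):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "null key is fine");   // HashMap allows one null key
        hashMap.put("k", null);                  // and null values
        System.out.println(hashMap.get(null));   // null key is fine

        Map<String, String> table = new Hashtable<>();
        try {
            table.put(null, "boom");             // Hashtable rejects null keys
        } catch (NullPointerException e) {
            System.out.println("Hashtable: NullPointerException");
        }
    }
}
```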

Note on Some Important Terms:
1) Synchronized means only one thread can modify a hashtable at one point in time. Basically, any thread must
acquire a lock on the object before performing an update, while other threads wait for the lock to be
released.

2) Fail-fast is relevant in the context of iterators. If an iterator has been created on a collection object
and some other thread modifies the collection object "structurally", a ConcurrentModificationException
will be thrown. A thread may still invoke the "set" method of a ListIterator, since set() does not modify
the collection "structurally".

3) Structural modification means deleting or inserting an element, which effectively changes the structure
of the map.
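The fail-fast behaviour around structural modification can be reproduced in a few lines (the class name is mine):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String key : map.keySet()) {
                map.put("c", 3); // structural modification mid-iteration
            }
        } catch (ConcurrentModificationException e) {
            // The iterator detects the modCount change on its next step.
            System.out.println("fail-fast: " + e.getClass().getSimpleName());
        }
    }
}
```

Note this fires even in a single thread; it is about modification during iteration, not threading as such.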


=================Other important points about use of Map============
1) If the class whose objects are being stored in the Map overrides the equals() and hashCode() methods, and
   hashCode() returns a fixed value like 45 or 10 while equals() returns true in all cases, as in the code
   below, then the Map will consider the objects duplicates, so only one pair will be shown in the output.
   Check the MainMap.java program below:

//Person.java (overriding equals() and hashCode() method
public final class Person {
    final String name;
    public Person(String name){
        this.name=name;
    }
    @Override
    public boolean equals(Object obj) {
        //return super.equals(obj);
        return true;
    }
    @Override
    public int hashCode() {
        return 45;
    }
}

===========
//MainMap.java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Set;

public class MainMap {
    public static void main(String[] args) {
        Person a=new Person("John");
        Person b=new Person("Deepak");
        Person c=new Person("John");
        Person d=new Person("John");

        System.out.println("a.hashCode():"+a.hashCode());
        System.out.println("b.hashCode():"+b.hashCode());
        System.out.println("c.hashCode():"+c.hashCode());
        System.out.println("d.hashCode():"+d.hashCode());

        HashMap map=new HashMap();
        map.put(a,"John");
        map.put(b, "Deepak");
        map.put(c,"John");
        System.out.println("Map contents: "+map);

        System.out.println("Trying to get object \"d\" : "+map.get(d));

        Set s=new HashSet();
        s.add(a);
        s.add(b);
        s.add(c);
        System.out.println("Set containing Person objects :"+s);
        //System.out.println(s.get(d)); //get method is not allowed in Set.
        System.out.println("Does Set contains Person d object \"d\": "+s.contains(d));  //true

        s.clear();
        Integer intObj1=new Integer("0");
        Integer intObj2=new Integer("0");
        s.add(intObj1);
        s.add(intObj2);
        System.out.println("Set containing Integer objects :"+s);

        s.clear();
        String stringObj1=new String("0");
        String stringObj2=new String("0");
        s.add(stringObj1);
        s.add(stringObj2);
        System.out.println("Set containing String objects :"+s);


        s.clear();
        StringBuffer stringBuffObj1=new StringBuffer("0");
        StringBuffer stringBuffObj2=new StringBuffer("0");
        s.add(stringBuffObj1);
        s.add(stringBuffObj2);
        System.out.println("Set containing StringBuffer objects :"+s);
        
        s.clear();
        Thread threadObj1=new Thread("0");
        Thread threadObj2=new Thread("0");
        s.add(threadObj1);
        s.add(threadObj2);
        System.out.println("Set containing Thread objects :"+s);
    }
}
==========
//Output:
a.hashCode():45
b.hashCode():45
c.hashCode():45
d.hashCode():45
Map contents: {Person@2d=John}   //Map shows only one entry.
Trying to get object "d" : John  //Value is printed, even though "d" itself was never put into the Map.
Set containing Person objects :[Person@2d]   //Set shows only one entry.
Does Set contains Person d object "d": true
Set containing Integer objects :[0]
Set containing String objects :[0]
Set containing StringBuffer objects :[0, 0]
Set containing Thread objects :[Thread[0,5,main], Thread[0,5,main]]


******************
However if you change the equals() method like below:
    @Override
    public boolean equals(Object obj) {
        return super.equals(obj);
        //return true;
    }

//Output:
a.hashCode():45   //Our defined hashCode
b.hashCode():45
c.hashCode():45
d.hashCode():45
Map contents: {Person@2d=John, Person@2d=Deepak, Person@2d=John} //Map shows 3 entries
Trying to get object "d" : null   //Null is printed.
Set containing Person objects :[Person@2d, Person@2d, Person@2d] //Set shows 3 entries
Does Set contains Person d object "d": false
Set containing Integer objects :[0]
Set containing String objects :[0]
Set containing StringBuffer objects :[0, 0]
Set containing Thread objects :[Thread[0,5,main], Thread[0,5,main]]

******************
Also if you change the equals() and hashCode() method like below:
    @Override
    public boolean equals(Object obj) {
        return super.equals(obj);
        //return true;
    }
    @Override
    public int hashCode() {
        return name.hashCode();
    }
//Output:
a.hashCode():2314539   //hashCode derived from name ("John".hashCode())
b.hashCode():2043177526
c.hashCode():2314539
d.hashCode():2314539
Map contents: {Person@79c86a36=Deepak, Person@23512b=John, Person@23512b=John}  //Map shows 3 entries
Trying to get object "d" : null   //Null is printed.
Set containing Person objects :[Person@79c86a36, Person@23512b, Person@23512b]  //Set shows 3 entries
Does Set contains Person d object "d": false
Set containing Integer objects :[0]
Set containing String objects :[0]
Set containing StringBuffer objects :[0, 0]
Set containing Thread objects :[Thread[0,5,main], Thread[0,5,main]]

******************
Also if you change the equals() and hashCode() method like below:
    @Override
    public boolean equals(Object obj) {        
        return true;
    }
    @Override
    public int hashCode() {
        return name.hashCode();
    }    

//Output:
a.hashCode():2314539
b.hashCode():2043177526
c.hashCode():2314539
d.hashCode():2314539
Map contents: {Person@79c86a36=Deepak, Person@23512b=John}   //Map shows 2 entries with different names.
Trying to get object "d" : John        //Value is printed, even though "d" itself was never put into the Map.
Set containing Person objects :[Person@79c86a36, Person@23512b] //Set shows 2 entries.
Does Set contains Person d object "d": true
Set containing Integer objects :[0]
Set containing String objects :[0]
Set containing StringBuffer objects :[0, 0]
Set containing Thread objects :[Thread[0,5,main], Thread[0,5,main]]

======================END=================

Saturday, 27 July 2013

List Of implicit objects Liferay JSP page 


On a normal JSP page, some objects are implicitly available, and in Liferay we can get several others using the taglibs. But we don't know them all, so let's become a technical James Bond and investigate. :D Let's look at normal JSP first. These objects are created by the container automatically, and the container makes them available to us. Since they are created automatically by the container and are accessed using standard variables, they are called implicit objects. They are parsed by the container, and they are available only within the _jspService method, not in any declaration.

