On NAT, WebRTC and reverse engineering Facebook video chat

What is NAT?

Network address translation (NAT) is a method of remapping one IP address space into another by modifying the IP headers of packets while they are in transit across a router. This allows NAT routers to act as an “interface” between the public WAN Internet and a private (non-public) LAN. In essence, it allows multiple nodes to access the Internet as a single machine.

How do they work?

The NAT router memorizes each outgoing packet’s destination IP and port number and assigns the packet its own IP and one of its own ports for accepting the return traffic. This mapping of internal IP and internal port to external port is recorded and used when the outside server responds to the internal host via the NAT. Thus, when the router receives a packet from an outside host, the only way it knows which computer should receive the incoming packet is if one of the internal computers on the private LAN first sent packets out to the source of the returning packets. Due to this property, network admins tend to assume some general security properties when deploying a NAT:

  1. NATs effectively occlude/hide the internal network structure from the outside world.
  2. Nodes inside the private LAN can’t be addressed/accessed solely by the efforts of an outside host, i.e. a connection requires initiation from an internal host.
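The mapping behaviour described above can be sketched as a toy model (a Python sketch with made-up addresses; a real NAT tracks full 5-tuples, timeouts, and much more):

```python
import itertools

class NatRouter:
    """Toy NAT: maps internal (ip, port) pairs to external ports."""
    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self._next_port = itertools.count(first_port)
        self._out = {}   # (internal_ip, internal_port) -> external_port
        self._in = {}    # external_port -> (internal_ip, internal_port)

    def outbound(self, src_ip, src_port):
        """An internal host sends a packet; allocate (or reuse) an external port."""
        key = (src_ip, src_port)
        if key not in self._out:
            port = next(self._next_port)
            self._out[key] = port
            self._in[port] = key
        return (self.public_ip, self._out[key])

    def inbound(self, ext_port):
        """Return the internal endpoint, or None if no internal host initiated first."""
        return self._in.get(ext_port)

nat = NatRouter("203.0.113.7")
pub = nat.outbound("192.168.1.10", 5000)    # internal host initiates
assert nat.inbound(pub[1]) == ("192.168.1.10", 5000)  # return traffic is delivered
assert nat.inbound(12345) is None           # unsolicited inbound packet is dropped
```

The two asserts correspond exactly to the two assumed properties: internal structure is hidden behind the public IP, and unsolicited inbound traffic has no mapping to follow.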

Readers are encouraged to view [7] to gain more information about NATs.

In [8], Johannes Weber discusses why these assumptions should not be relied upon, since a NAT no longer isolates the internal workings/structure of the network once an adversary has gained access to a single host on the internal network (quite possible these days via attacks like social engineering, phishing emails, or malware, to name a few). Furthermore, he mentions how internet trackers can infer details about the internal network by distinguishing browser queries originating from different nodes.

We further expand on this by showing how WebRTC leaks information that effectively enables outside devices to gain insight into the internal LAN, along with some potential attack vectors. Finally, we show that these techniques (which are inherently vulnerable*) are deployed at large scale by companies like Facebook to provide video chat services.

* we call them vulnerable because the final effect might not have been the clients’ initial intention

Overview of WebRTC

WebRTC allows peer-to-peer connections between clients; it uses relay servers for signalling of meta-information and to work around NATs and firewalls [9]. WebRTC uses ICE (Interactive Connectivity Establishment) [4] to find the best path to connect peers. It tries all possibilities in parallel and chooses the most efficient option that works. As an overview, ICE first tries to make a connection using the host address (private endpoint) obtained from the device’s operating system and network card; if that fails, ICE obtains an external address (public endpoint) using a STUN server; and if that fails, traffic is routed via a TURN relay server. (An endpoint refers to an IP:port pair.)
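For the curious, a STUN request is tiny. The sketch below builds the fixed 20-byte header of a STUN Binding Request as defined in RFC 5389 (sending it over UDP and parsing the XOR-MAPPED-ADDRESS from the response are omitted here):

```python
import os
import struct

STUN_BINDING_REQUEST = 0x0001
MAGIC_COOKIE = 0x2112A442  # fixed value mandated by RFC 5389

def build_binding_request(txn_id=None):
    """Build a 20-byte STUN Binding Request with no attributes."""
    if txn_id is None:
        txn_id = os.urandom(12)  # random 96-bit transaction ID
    # message type (2B), message length (2B, zero: no attributes),
    # magic cookie (4B), then the 12-byte transaction ID
    return struct.pack("!HHI", STUN_BINDING_REQUEST, 0, MAGIC_COOKIE) + txn_id

msg = build_binding_request()
print(len(msg))  # 20
```

A client would send these bytes over UDP to the server (port 3478 by default); the reply carries the public IP:port that the NAT assigned, which ICE then advertises as a server-reflexive candidate.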

Going into the details

In the first approach, ICE exposes the private endpoints of the hosts (obtained from the OS and network card) in an attempt to check whether a direct connection exists between them; this is the case if one of the hosts has a public IP or both hosts are behind the same NAT. This technique goes against the assumption that outside devices are unable to gain information about the private LAN behind the NAT: the application using WebRTC now knows that the two communicating hosts are behind the same NAT, along with their private endpoints. The application can obtain further information by polling hosts based on the private IP pattern and/or mount other attacks, such as denial-of-service, from inside the private LAN.

The same can also be used to attack systems which use two NATs to create a DMZ (demilitarised zone) [7]. A connection can be established between a host in the highly protected intranet and a host in the DMZ using the private endpoint of the DMZ host. If the DMZ is compromised, this connection from the DMZ to the internal host lets an adversary penetrate the internal LAN via the DMZ. (Technically speaking, both of these scenarios are legitimate, since a host behind the NAT contacted the application first; at the same time, revealing the private endpoint may not have been its intention.)


In the second approach used by WebRTC, the internal host first contacts a public STUN server; this creates ephemeral ad-hoc port mappings in the NAT. The STUN server learns the IP and port from which it received the connection and provides these to the other client, which then establishes a connection to the internal host through these ephemeral port mappings. This technique is known as UDP hole punching [2]. In this case, if the STUN server is compromised, the internal LAN is exposed.

NOTE: in the case when the two clients are each behind two NATs, one of which is common, a peer-to-peer connection is only established if the common NAT supports hairpinning [5]. Refer to section 3.5 of [1].

Blackbox study of Facebook video chat

Two experiments were conducted to see how Facebook uses the above constructs:

Experiment 1: Two hosts, one behind a single NAT and the other behind two NATs (the single NAT is common). This mimics the case where one node is in the DMZ and the other is in the highly secure intranet.

Experiment 2: Two hosts behind different NATs, each with its own proxy server, trying to connect to each other.

In experiment 1, Facebook initiated a STUN request from the node in the highly secure intranet to create a connection with the node in the DMZ.


In experiment 2, we found an attempt from both hosts to contact the STUN server (Facebook’s public STUN server, located in Menlo Park, CA). This attempt failed since the hosts were behind proxy servers and the STUN request overlooked this detail. The initial connection to the browser (messenger chat engine) was made through the proxy, but the STUN request did not take this into consideration; Facebook could have added proxy support to the STUN request in order to achieve a peer-to-peer connection in this scenario.


Finally, the connection was established via the TURN relay server. (NOTE: the address observed here was the private IP of the proxy server in the experiment.)


To conclude, NATs should not be relied upon to provide security, neither to occlude the internal network architecture nor to create DMZs.


[1] Bryan Ford, Pyda Srisuresh, and Dan Kegel. 2005. Peer-to-peer communication across network address translators. In Proceedings of the annual conference on USENIX Annual Technical Conference (ATEC ’05). USENIX Association, Berkeley, CA, USA.
[2] https://en.wikipedia.org/wiki/UDP_hole_punching
[3] https://en.wikipedia.org/wiki/STUN
[4] https://en.wikipedia.org/wiki/Interactive_Connectivity_Establishment
[5] https://en.wikipedia.org/wiki/Hairpinning
[6] https://en.wikipedia.org/wiki/Traversal_Using_Relays_around_NAT
[7] https://www.grc.com/nat/nat.htm
[8] https://blog.webernetz.net/why-nat-has-nothing-to-do-with-security/
[9] https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/


Compiling GCC-4.1.2 – 64 bit Ubuntu 16.04.1 – 2017 (for PintOS)

The steps to be followed are:

  1. Download the source code and extract
    mkdir /tmp/gcc
    cd /tmp/gcc
    wget http://ftp.gnu.org/gnu/gcc/gcc-4.1.2/gcc-4.1.2.tar.bz2
    tar -xvjpf ./gcc-4.1.2.tar.bz2
    mkdir ./build
  2. Install dependencies:
    sudo apt-get install linux-headers-$(uname -r) zlib1g zlib1g-dev zlibc gcc-multilib
  3. You will likely have a different version of ld than what the gcc-4.1.2 build expects, hence the following change needs to be made:
    Change line 8284 of ./gcc-4.1.2/libstdc++-v3/configure from

    sed -e 's/GNU ld version \([0-9.][0-9.]*\).*/\1/'`

    to

    sed -e 's/GNU ld (GNU Binutils for Ubuntu) \([0-9.][0-9.]*\).*/\1/'`

    Also make the following links:

    sudo ln -s /usr/lib/x86_64-linux-gnu/crt1.o /usr/lib/crt1.o
    sudo ln -s /usr/lib/x86_64-linux-gnu/crti.o /usr/lib/crti.o
    sudo ln -s /usr/lib/x86_64-linux-gnu/crtn.o /usr/lib/crtn.o
  4. Now change to the /tmp/gcc/build directory created in the first step and execute:
    ../gcc-4.1.2/configure --program-suffix=-4.1 --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --disable-libunwind-exceptions --enable-__cxa_atexit --enable-languages=c,c++ --disable-multilib
    Also install texinfo (the build needs makeinfo):

    sudo apt-get install texinfo

    Modify ./Makefile, changing

    CC = gcc
    CXX = g++

    to

    CC = gcc -fgnu89-inline
    CXX = g++ -fgnu89-inline
  5. I did not face any errors with the above configure command. Now run make and `make install`:
    make -j 2 bootstrap MAKEINFO=makeinfo
    sudo make install

    Fortunately, make will run as expected and will create a gcc-4.1 binary in the $PATH.

  6. Update the Make.config in $PINTOSSRC/pintos/src to use gcc-4.1 instead of gcc.


My `uname -a` gives: Linux jarvis 4.10.0-35-generic #39~16.04.1-Ubuntu SMP Wed Sep 13 09:02:42 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

On Makefiles

This presents an intermediate-level introduction to writing ‘generic-looking’ Makefiles. We shall dig directly into decoding what the following Makefile is doing (refer to [1] first if you are new to Makefiles):

NOTE: many of the explanations involving shell commands and self-explanatory statements have been omitted for the sake of brevity.

## Makefile supports following:
# make all # or link   - to compile and link
# make build           - to build only
# make clean           - to delete all output files
# make cleanall        - to delete output files as well as TAGS and cscope files
# make TAGS            - to create emacs TAGS
# make cscope          - to create cscope files
# make dependencies    - to install dependencies (debian system)

CC = gcc
LD = ld

SRCDIR := src
INCDIR := inc
BUILDDIR := build
DEPDIR := deps
OUTPUTDIR := bin
TARGET := $(OUTPUTDIR)/shell

WARNINGS = -Wall -W -Wstrict-prototypes -Wmissing-prototypes -Wsystem-headers
CFLAGS = $(WARNINGS) -I$(INCDIR) $(DEPFLAGS)
LDLIBS = -lreadline

SOURCES = $(shell find $(SRCDIR) -type f -name "*.c")
OBJECTS = $(patsubst $(SRCDIR)/%,$(BUILDDIR)/%,$(SOURCES:.c=.o))
DEPS = $(patsubst $(SRCDIR)/%,$(DEPDIR)/%,$(SOURCES:.c=.d))

DEPFLAGS = -MT $@ -MMD -MP -MF $(DEPDIR)/$*.Td
POSTCOMPILE = @mv -f $(DEPDIR)/$*.Td $(DEPDIR)/$*.d && touch $@

$(shell mkdir -p $(DEPDIR) > /dev/null)
$(shell mkdir -p $(BUILDDIR) > /dev/null)
$(shell mkdir -p $(OUTPUTDIR) > /dev/null)

all: build link

dependencies:
	sudo apt-get install libreadline-dev

build: $(OBJECTS)
link: $(TARGET)

.PHONY: all build link clean cleanall dependencies TAGS cscope help

$(TARGET): $(OBJECTS)
	$(LINK.o) $^ $(LOADLIBES) $(LDLIBS) -o $@

%.o: %.c
$(BUILDDIR)/%.o: $(SRCDIR)/%.c $(DEPDIR)/%.d
	$(COMPILE.c) $(OUTPUT_OPTION) $<
	$(POSTCOMPILE)

$(DEPDIR)/%.d: ;
.PRECIOUS: $(DEPDIR)/%.d

-include $(DEPS)

TAGS:
	find . -name "*.[chS]" | xargs etags -a

cscope:
	find . -name "*.[chS]" > cscope.files
	cscope -b -q -k

clean:
	$(RM) -rv $(BUILDDIR) $(DEPDIR) $(OUTPUTDIR)

cleanall: clean
	$(RM) -v cscope.* TAGS

help:
	@echo 'make <all, clean, build, link, dependencies, TAGS, cscope>'

The first part initialises the directory variables, to be used later.
‘:=’ definitions are expanded immediately, when the assignment is read, while ‘=’ definitions are expanded each time the variable is used. Therefore, if you want the same variable to expand differently on different uses, use ‘=’. Generally, ‘:=’ is used for constant definitions and ‘=’ otherwise.
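A rough analogy in Python (not Make syntax, just to illustrate the evaluation order): ‘:=’ behaves like a value snapshotted now, while ‘=’ behaves like a thunk re-evaluated on every use:

```python
files = ["a.c"]

simple = list(files)              # like FILES2 := $(FILES): snapshot taken now
recursive = lambda: list(files)   # like FILES2  = $(FILES): expanded on each use

files.append("b.c")               # later change to the underlying variable
print(simple)        # ['a.c']         -- unchanged
print(recursive())   # ['a.c', 'b.c']  -- sees the later change
```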

At this point it is also good to NOTE that GNU Make has a number of built-in rules including:

%.o: %.c
%: %.o
	$(LINK.o) $^ $(LOADLIBES) $(LDLIBS) -o $@

One can see that the definitions above conform to the built-in rules. We might not want all of the built-in rules to apply to our build, so we may have to ‘override’ them (more on this soon).

$VPATH is used by GNU Make to search for sources. For example, if we have a rule foo.o : foo.c and foo.c is not found in the current directory, then make will try to find foo.c in the directories listed in $VPATH.

patsubst is used for substitution of a pattern in a string; its format is as follows:
$(patsubst pattern,replacement,text)

$(SOURCES:.c=.o) is a substitution reference, basically shorthand for patsubst:
$(variable:pattern=replacement)

OBJECTS = $(patsubst $(SRCDIR)/%,$(BUILDDIR)/%,$(SOURCES:.c=.o))

translates to:

OBJS = $(SOURCES:.c=.o)
OBJECTS = $(patsubst $(SRCDIR)/%,$(BUILDDIR)/%,$(OBJS))
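To make the semantics concrete, here is a small Python emulation of patsubst (a sketch: it handles the common case of a single ‘%’ wildcard and leaves non-matching words untouched, like the real function):

```python
def patsubst(pattern, replacement, words):
    """Toy version of GNU Make's $(patsubst ...): '%' matches any stem."""
    out = []
    for w in words.split():
        if "%" in pattern:
            pre, post = pattern.split("%", 1)
            if w.startswith(pre) and w.endswith(post) and len(w) >= len(pre) + len(post):
                stem = w[len(pre):len(w) - len(post)]
                out.append(replacement.replace("%", stem, 1))
                continue
        out.append(w if w != pattern else replacement)
    return " ".join(out)

sources = "src/main.c src/util/log.c"
objs = patsubst("%.c", "%.o", sources)       # what $(SOURCES:.c=.o) computes
print(patsubst("src/%", "build/%", objs))    # src/... relocated under build/
```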

$@: target of a rule
$^: dependencies of a rule
$<: first dependency of a rule
$*: the stem that matched the ‘%’ in a pattern rule

NOTE: if we use *.c and there is no .c file, then make will raise errors; in these situations it is preferable to use $(wildcard expression), which expands to nothing if no match is found.

mkdir: the -p option creates a directory along with its full path, creating intermediate directories wherever required.

.PHONY targets are those which don’t refer to files; they exist to avoid confusion between project files and make targets. For example, you might have a file named clean in your project, and you don’t want make to meddle with it.

NOTE: by default, all the shell commands written in the Makefile are echoed while running make; to suppress this, use the ‘@’ prefix, e.g.:

@echo 'make <all, clean, build, link, dependencies, TAGS, cscope>'

Handling Dependencies:

Modern compilers have support for emitting make rules that reflect the headers included in the source files. We can make use of this to automate dependency tracking in the Makefile.

For this the following options are used:

-MT $@
Set the name of the target in the generated dependency file. (By default, the target would be the name of the source file with its suffix replaced by .o and any directory prefix dropped.)
-MMD
Generate dependency information as a side-effect of compilation, not instead of compilation. (The -M and -MM options only generate dependency information and don’t compile, i.e. they imply -E, preprocess only.) This version omits system headers from the generated dependencies; if you prefer to keep system headers as prerequisites, use -MD instead.
-MP
Add a phony target for each prerequisite in the list, to avoid errors when files are deleted. This adds rules of the form %.h: ; so that if you delete a header file, make continues without raising a “header not found” error, and on the subsequent make invocation the header is dropped from the dependency list. Similarly, $(DEPDIR)/%.d: ; is present so that make does not run into errors when .d files are missing or deleted.
-MF $(DEPDIR)/$*.Td
Write the generated dependency file to a temporary location, $(DEPDIR)/$*.Td. We write temporarily to .Td files and then rename .Td to .d in the POSTCOMPILE step; make might be terminated midway, and we don’t want to end up with broken dependency files.

.PRECIOUS tells make that the %.d files are to be retained if make runs into errors. By default, make deletes intermediate files if it fails to complete execution.

-include $(DEPS): the include statement is similar to the #include directive in C/C++. The ‘-’ prefix tells make not to complain if a file is not present.

[1] http://web.mit.edu/gnu/doc/html/make_2.html
[2] http://make.mad-scientist.net/papers/advanced-auto-dependency-generation/
[3] http://bruno.defraine.net/techtips/makefile-auto-dependencies-with-gcc/#comment-50775

Appendix A:
Directory structure inferred by the Makefile:

├── bin
│   └── shell
├── build
│   └── *.o
├── deps
│   └── *.d
├── inc
│   └── *.h
├── Makefile
└── src
    └── *.c

Bochs for pintos – IITG OS LAB CS342

To run the pintos code locally you will need a local installation of the bochs emulator. Unfortunately, the bochs installation procedure mentioned in the Pintos documentation does not work.

We tried the following and this seems to work for us:

    1. Download bochs-2.5.1 (the version installed on progsrv): https://sourceforge.net/projects/bochs/files/bochs/2.5.1/bochs-2.5.1.tar.gz/download
    2. Extract the archive and cd into it
    3. Run:
      $ ./configure --with-nogui --enable-gdb-stub
      $ make
      $ sudo make install
    4. Test using:
      $ pintos run alarm-multiple

While compiling pintos locally you might run into errors while running make in the utils folder; the Makefile there is somewhat dated. If the error is:

function `main’:
setitimer-helper.c:(.text+0xc9): undefined reference to `floor’
collect2: error: ld returned 1 exit status
: recipe for target ‘setitimer-helper’ failed
make: *** [setitimer-helper] Error 1

then just run: $ gcc setitimer-helper.o -lm -o setitimer-helper (the -lm flag links the math library, which provides floor).

How you can master the art of ‘Influencing People’

No matter who we are, all of us at some point or another have faced a hard time convincing people of our expectations/desires, irrespective of our position. This article shall help you get acquainted with the critical traits necessary to change your attitude and lifestyle, which can bring you into the good books of most people. At this point, I would like to acknowledge Dale Carnegie, the renowned author and lecturer in self-improvement and public speaking, whose books have had a strong impact on me and my lifestyle and eventually led to this article.

Most of this post will be a short summary of Carnegie’s best-selling book, ‘How to Win Friends and Influence People’. Most of the tactics might sound pretty ordinary, but mind you, if one puts in a conscious effort to imbibe them in day-to-day life, they will have a tremendous impact on one’s relations and overall outlook. Being an engineer in training, I naturally prefer summarized lists over long paragraphs, so I chose the following methodology to deliver my message.

  1. “People don’t like criticism up front.” Whenever someone tells us we are wrong, we go into defensive mode; even after realizing our mistake, we tend to stand our ground, since we feel our worth has become associated with that particular philosophy. So, if you want someone to change, don’t even think about criticizing them; rather, try to show them how a new perspective can be derived from their current state, so that the change you want to bring appears to come from their own thought process.
  2. “Everyone loves a compliment.” This goes back to my endeavors to get a project with a professor at my university; a dear friend of mine was one of the first to introduce me to the effectiveness of paying compliments. We met with this busy-busy professor who wouldn’t even listen to requests from undergraduates, let alone consider them for an ongoing project. Now, my friend is the son of a stellar businessman and has acquired quite a few techniques in this art himself. As soon as we entered his room and said we were undergrads looking for opportunities, we were told to abstain from wasting his time and to leave his office immediately. My good friend then went on to butter up this busy professor, congratulating him on his success at a recent conference and on his recent publications; next thing you know, we were talking about common interests and his current project openings. I had never realized that a professor as accomplished as he was would be flattered by such teensy appreciations and cater to our simple requests.

This is a running article; I will keep updating it as and when I find the time. This strategy of small updates keeps articles fresh and short, and thus more likely to be read, and it also fits into my schedule.

Continuation to My Thoughts: “My Gita” – Devdutt Pattanaik

Original Post: https://108foundation.wordpress.com/2017/02/08/my-thoughts-on-my-gita-by-devdutt-pattanaik/

  • The third chapter delves into enlightening the reader about exploring their uniqueness. It tells the reader that everyone experiences life in a different fashion. The author compares plants, animals and humans w.r.t. the realities they are exposed to: sensual, conceptual (imagination and intelligence), and emotional. He divides truth into 3 categories, Reality (Everybody’s Truth), Myth (Somebody’s Truth) and Fantasy (Nobody’s Truth), and shows how one can fool the mind, i.e. even though one might be in pain or frustrated, one can simulate a feeling of joy and happiness by mere imagination. He explains a major psychological observation: our emotional state affects what we observe, and what we observe affects our emotional state. Then, in a very subtle manner, he relates zero and infinity to human behavior and thought: the hermit (sanyasi) prefers the concept of withdrawal into oblivion, while the householder embraces everything, hence infinity. At this point I can’t resist myself from quoting this dialogue between Krishna and Arjuna:

    Krishna: Arjuna, immerse your mind in me and I will uplift you from the ocean of recurring death. If you cannot do that, then practice yoga and work on your mind. If you cannot do that, then do your work as if it is my work. If you can’t do that, then make yourself my instrument and do as I say. If you cannot do that, then simply do your job and leave the results to me. (Ch-12, verses 6-11, Bhagavad Gita)

  • The 4th chapter deals with the human yearning for meaning, for sanity’s sake. The author describes the five-container architecture of the body as described in the Upanishads: our breath resides in our flesh, our mind within our breath, our concepts within our mind, and our emotions within our concepts. We sense emotions as they are expressed through the body and breath. He says that only when there is conceptual clarity do we experience tranquility; devoid of it, we feel fear: fear of losing opportunities, fear of threats, fear of achievement, of abandonment and invalidation. Thus, as long as we seek validation from outside, we are entrapped by aham, and as we realize that all meaning comes from within, and that it is us who add meaning to the world, it is then that we are liberated by atma.
  • The 5th chapter is about facing consequences. As always, Krishna advises Arjuna to follow the householder path rather than the hermit’s; he says that one can’t attain freedom simply by withdrawing from society. This idea contrasts sharply with the ideologies of Buddhism, which is all about surrendering and withdrawing into oblivion. He mocks the person who controls their senses while having a mind full of cravings. We have to face the consequences of our actions, like Dumbledore tells Harry,

    “It is our choices, Harry, that show what we truly are, far more than our abilities.”

    Whatever we do has immediate results and long term repercussions. We reap what we sow and what we sow is in accordance with what we had reaped.

My Thoughts on “My Gita” by Devdutt Pattanaik

The book My Gita is Devdutt Pattanaik’s impression of the Bhagavad Gita (the Hindu scripture, commonly known as the Gita). The Gita is a narration by Sanjaya (Dhritarashtra’s adviser, who had infinite sight) to Dhritarashtra (the blind king of Hastinapur, capital of the Kuru empire, over which the epic war of Mahabharata was fought) of the talks between Krishna and Arjuna. Unlike the Bhagavad Gita, Devdutt has tried to write “My Gita” in a fashion such that the reader can go through it in a sequential manner, which really helps if you are starting to learn about theism, Hinduism or philosophy, as it does not assume any prerequisite knowledge.

The book was a great read, it helped me get great clarity w.r.t future endeavors, relations, conduct and philosophy.

It starts by demythifying the stories pertaining to the Hindu religion, contrasting the beliefs of people through the ages and portraying how drastically they changed. This was quite astonishing, as there is a vast difference between what was prevalent and what is prevalent now. For the exact details I recommend you go through the starting chapters. The author also establishes a clear idea of a god and differentiates between human, animal, deva, asura and bhagavan; this was again different from common societal conventions.

Thereafter start the 18 chapters (the Mahabharata war was 18 days long; the original Gita has one chapter for each day, and so does My Gita).

  • The first chapter introduces the concept of darshan: as long as we judge, we cannot see the world for what it is. This is implicit in Hinduism (there is no judgement day (Qayamat) in Hindu mythology, unlike the Abrahamic religions). The author describes life as ranga-bhoomi (a performance on stage aimed at nourishing and comforting others and deriving the same from their delights).
  • The author then introduces the Hindu concept of rebirth and distinguishes between dehi (atma) and deha (body). With this there is no fear of death, no longing for validation. The 2nd chapter then deals with how our actions are a product of our reactions to our own and others’ circumstances, which in turn are generated through our reactions; so we alone are responsible for our problems, and whatever we do in this lifetime, we shall bear its fruit later.

Stay tuned for thoughts on other chapters…

Emacs and GDB Ice Breaker

I started using emacs recently; earlier I used editors like Sublime, Code::Blocks, and PhpStorm depending on the purpose. The main reason for switching was the crappy nature of Code::Blocks: the debugger GUI ran into screen tearing, icons tended to go missing, and what not. So I was looking at various GUI debuggers and found many cool ones which I’d like to mention: https://github.com/cyrus-and/gdb-dashboard , ddd, and emacs with gdb-many-windows. Emacs seemed the best of them, considering the added navigation, recording and other benefits.

First, let us install emacs: download the source from https://www.gnu.org/software/emacs/download.html ; for reference, at the time of composing this post one would download emacs-25.1.tar.xz .

Then extract the archive, open a terminal, and execute $ ./configure. If you run into errors like missing header dependencies, execute
$ sudo apt-get build-dep emacs24. If it says you need to add sources, un-comment the deb-src lines in /etc/apt/sources.list . Alternatively, you can enable the sources from the software settings.

If you are on Ubuntu 16.10 you will have to run
$ ./autogen.sh
$ ./configure CFLAGS=-no-pie # to avoid errors later
After $ ./configure, run $ make and then $ sudo make install

Launch emacs. I should warn you that it has a steep learning curve. I am not covering the basics here because there are already many places for that; for starters you could watch the series https://www.youtube.com/watch?v=ujODL7MD04Q . Having done that, you can modify your config to what suits you best.

My ~/.emacs stands as:

(require 'package) ;; You might already have this line
(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/"))
(when (< emacs-major-version 24)
  ;; For important compatibility libraries like cl-lib
  (add-to-list 'package-archives '("gnu" . "http://elpa.gnu.org/packages/")))
(package-initialize) ;; You might already have this line

(setq c-default-style "linux"
      c-basic-offset 4)

(setq url-proxy-services
      '(("no_proxy" . "^\\(localhost\\|10.*\\)")
        ("http" . "")
        ("https" . "")))

(add-to-list 'load-path "~/.emacs.d/lisp/")
(require 'auto-complete)
(require 'sr-speedbar)
(require 'multiple-cursors)
(global-set-key (kbd "C-S-c C-S-c") 'mc/edit-lines)

(global-set-key (kbd "C->") 'mc/mark-next-like-this)
(global-set-key (kbd "C-<") 'mc/mark-previous-like-this)
(global-set-key (kbd "C-c C-<") 'mc/mark-all-like-this)

You could further read about gdb-many-windows.

Moving on to gdb: the basics are, I think, covered best at http://stanford.edu/~adyuen/gdb
Read that and try debugging some programs. Some things worth mentioning are the commands display and until, and using checkpoints and recording, which come in handy many times; you can find all of their details online.

Getting Nvidia Drivers working on MSI Apache Pro GE62 6QF (Ubuntu 16.04 with Gnome and LightDM)

Update (Dec 2016): This link helps install nVidia drivers along with bumblebee. This worked for me on the same machine.

Firstly, you want to use the proprietary microcode drivers (from Additional Drivers) so that the Intel HD Graphics 530 works properly, and then use nVidia for 3D graphics.

To install, follow one of the following methods (in order of preference). (Note: to use nVidia graphics, Secure Boot must be disabled in the BIOS setup.)

Method 1: Just go to Additional Drivers and select the nVidia driver you want

Method 2: Download the .run file from the nVidia website and then install it (include the 32-bit libs when asked):
$ sudo chmod u+x file.run
$ sudo ./file.run or $ sudo sh file.run

Method 3: $ sudo apt-get install nvidia-current
(after $ sudo add-apt-repository ppa:graphics-drivers/ppa and
$ sudo apt-get update)

Update /etc/default/grub as follows:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.modeset=1 nouveau.blacklist=1"
then run $ sudo update-grub

Also make sure your distro is up to date:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get dist-upgrade
you can update your Kernel via: https://www.quora.com/How-do-we-update-kernel-in-Ubuntu-by-command-line-in-terminal

Common Problems:

If you are not even able to get past the login screen after a fresh install:
Disable Secure Boot and use the nomodeset parameter in the boot options.

Installed nVidia drivers but stuck in a login loop:
Either uninstall the drivers and reinstall as mentioned above, or just try updating the grub file as mentioned above, or boot with appropriate boot parameters.
Use tty 1-6 for updating the grub file (Ctrl+Alt+F1 – F6).
To add boot parameters on the grub screen, press E; you will find a line containing quiet splash. Modify it appropriately, then press F10 to boot.

To remove the driver, just do $ sudo apt-get remove --purge nvidia* (does not work in zsh; use bash) OR use $ sudo ./file.run --uninstall if installed via the .run file.



New Learnings: Custom Comparator with STL, const int * const ptr, reference to pointer to const, Operator Overloading

Today, while implementing Huffman coding (http://www.geeksforgeeks.org/greedy-algorithms-set-3-huffman-coding/), I learned a few new things which I thought must be shared:

  1. Adding a custom comparator with STL containers and algorithms.
  2. In doing so, I learned when, why and how you can overload operators in classes.
  3. We can’t overload operator= using a friend function: http://stackoverflow.com/questions/2865036/why-cant-we-overload-using-friend-function
  4. Why operator overloading doesn’t work for pointers: http://stackoverflow.com/questions/6171630/why-isnt-operator-overloading-for-pointers-allowed-to-work
    Also, when overloading operators with a friend function, at least one of the arguments must be an instance of the class or a reference to it.
  5. The differences between const int *, int * const and int const * const.
  6. Why can’t I pass a non-const pointer to a function taking a reference to a pointer to const as its argument.