<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Avabodha]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://avabodha.in/</link><image><url>https://avabodha.in/favicon.png</url><title>Avabodha</title><link>https://avabodha.in/</link></image><generator>Ghost 4.9</generator><lastBuildDate>Mon, 30 Mar 2026 04:41:35 GMT</lastBuildDate><atom:link href="https://avabodha.in/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Networking Project in C - Chat Room]]></title><description><![CDATA[<p>In this tutorial, we will explore how to create a chat room using the C programming language. We will start by implementing the server, which will allow multiple clients to connect and exchange messages in real time. By following this step-by-step guide, you will gain a better understanding of networking</p>]]></description><link>https://avabodha.in/networking-project-in-c-chat-room/</link><guid isPermaLink="false">649ac94785e1c5050d8a398d</guid><category><![CDATA[c-programming]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Tue, 27 Jun 2023 16:27:45 GMT</pubDate><media:content url="https://avabodha.in/content/images/2023/06/chat-room-3.png" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2023/06/chat-room-3.png" alt="Networking Project in C - Chat Room"><p>In this tutorial, we will explore how to create a chat room using the C programming language. We will start by implementing the server, which will allow multiple clients to connect and exchange messages in real time. 
By following this step-by-step guide, you will gain a better understanding of networking in C and learn how to build a simple chat application.</p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2023/06/image.png" class="kg-image" alt="Networking Project in C - Chat Room" loading="lazy" width="741" height="341" srcset="https://avabodha.in/content/images/size/w600/2023/06/image.png 600w, https://avabodha.in/content/images/2023/06/image.png 741w" sizes="(min-width: 720px) 720px"></figure><p>Full code can be found at <a href="https://github.com/lets-learn-it/c-learning/tree/master/16-networking/02-chat-room-PROJECT">https://github.com/lets-learn-it/c-learning/tree/master/16-networking/02-chat-room-PROJECT</a></p><h2 id="setting-up-server">Setting up server</h2><p>The server will handle client connections, message reception, and message broadcasting.</p><pre><code class="language-c">#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;arpa/inet.h&gt;
#include &lt;sys/socket.h&gt;
#include &lt;string.h&gt;
#include &lt;sys/time.h&gt;
#include &lt;errno.h&gt;

#define MAX_CLIENTS 4

int main(int argc, char const *argv[]) {
  int mastersockfd, connfds[MAX_CLIENTS];
  int activeconnections = 0;

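  // (sketch, not part of the original excerpt) the later snippets in this
  // tutorial also use the following variables; the names match the code
  // below, and the full listing is in the linked repository:
  struct sockaddr_in serv_addr, clientIPs[MAX_CLIENTS];
  int addrlen = sizeof(serv_addr);
  char inBuffer[MAX_CLIENTS][1024], outBuffer[MAX_CLIENTS][1024];
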
  // Rest of the code...
}</code></pre><p>The code starts by including the necessary header files and defining the maximum number of clients (<code>MAX_CLIENTS</code>) that can connect to the server. We initialize <code>mastersockfd</code>, which will listen for new connections, and <code>connfds</code>, an array of all live TCP connections.</p><p>We will create a socket using the <code>AF_INET</code> address family (IPv4) and the <code>SOCK_STREAM</code> socket type (TCP). <code>socket</code> returns a file descriptor for the socket, or -1 on failure, in which case an error message is printed and the program exits.</p><pre><code class="language-c">if((mastersockfd = socket(AF_INET, SOCK_STREAM, 0)) &lt; 0) {
  perror(&quot;error while creating socket...&quot;);
  exit(1);
}</code></pre><p>Next, we configure the server&apos;s address using the <code>sockaddr_in</code> structure. We set the address family and the port number (8081 in this example), and accept connections on any available network interface (<code>INADDR_ANY</code>).</p><pre><code class="language-c">serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(8081);
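// htons() converts the 16-bit port to network byte order (big-endian);
// without it, on a little-endian machine the port 8081 (0x1F91) would be
// stored byte-swapped as 0x911F (37151)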
serv_addr.sin_addr.s_addr = INADDR_ANY;</code></pre><p>To ensure that the socket can be reused immediately after the server is stopped, we set the <code>SO_REUSEADDR</code> option using the <code>setsockopt</code> function. More can be found at <a href="https://stackoverflow.com/questions/2208581/socket-listen-doesnt-unbind-in-c-under-linux">https://stackoverflow.com/questions/2208581/socket-listen-doesnt-unbind-in-c-under-linux</a></p><pre><code class="language-c">int opt = 1;
setsockopt(mastersockfd, SOL_SOCKET, SO_REUSEADDR, (void *) &amp;opt, sizeof(opt));
</code></pre><p>The <code>bind</code> function binds the socket to the server&apos;s address. If the binding fails, an error message is printed, and the program exits.</p><pre><code class="language-c">if(bind(mastersockfd, (struct sockaddr *) &amp;serv_addr, addrlen) &lt; 0) {
  perror(&quot;bind failed...&quot;);
  exit(1);
}</code></pre><p>We use the <code>listen</code> function to listen for incoming client connections on the socket. The second argument (3 in this case) specifies the maximum number of pending connections that can be queued.</p><pre><code class="language-c">if(listen(mastersockfd, 3) &lt; 0) {
  perror(&quot;Listen failed ...&quot;);
  exit(1);
}</code></pre><h3 id="asynchronous-io-multiplexing">Asynchronous I/O Multiplexing</h3><p>Since we will be dealing with multiple clients at the same time, how can we know which connection has new data ready? We want to be notified when a connection is ready for reading. This capability is called <strong>I/O multiplexing</strong> and is provided by the <code>select</code> and <code>poll</code> functions. More on this can be read at <a href="https://notes.shichao.io/unp/ch6/">https://notes.shichao.io/unp/ch6/</a>.</p><p>We will be using <code>select</code> for asynchronous I/O multiplexing. To begin with, we need to define the necessary variables. The <code>fd_set</code> structure holds a set of file descriptors. We declare <code>readfds</code> as an <code>fd_set</code> to keep track of the file descriptors available for reading. We also define the variables <code>max_fd</code> and <code>readyfds</code>.</p><pre><code class="language-c">fd_set readfds;
int max_fd, readyfds;</code></pre><p>In infinite <code>while</code> loop, we will continuously check for incoming connections and messages from clients. At the start of each iteration, we clear the <code>readfds</code> set using <code>FD_ZERO</code> to remove any previously set file descriptors. Next, we add the master socket (<code>mastersockfd</code>) to the <code>readfds</code> set using <code>FD_SET</code>. The master socket is the socket that listens for new client connections. We also update the <code>max_fd</code> variable with the maximum file descriptor value. We iterate through the array of active client sockets (<code>connfds</code>) and add each valid socket to the <code>readfds</code> set using <code>FD_SET</code>. We also update the <code>max_fd</code> value if necessary.</p><pre><code class="language-c">FD_ZERO(&amp;readfds);

// add mastersockfd
FD_SET(mastersockfd, &amp;readfds);
max_fd =  mastersockfd;

for(int i=0;i &lt; activeconnections;i++) {
  if(connfds[i] != 0)
  	FD_SET(connfds[i], &amp;readfds);
  if(connfds[i] &gt; max_fd)
  	max_fd = connfds[i];
}</code></pre><p>We call the <code>select</code> function to wait for activity on the file descriptors. It monitors the file descriptors specified in the <code>readfds</code> set and blocks until any of them become available for reading. The <code>max_fd + 1</code> parameter ensures that <code>select</code> checks all file descriptors from 0 to <code>max_fd</code>. If the <code>select</code> call returns a negative value (<code>readyfds &lt; 0</code>) and the error is not due to an interrupted system call (<code>errno != EINTR</code>), an error message is displayed.</p><pre><code class="language-c">readyfds = select(max_fd + 1, &amp;readfds, NULL, NULL, NULL);

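// a NULL timeout makes select() block indefinitely; as a variant (not in
// the original code), you could wake up periodically with a struct timeval:
//   struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
//   readyfds = select(max_fd + 1, &amp;readfds, NULL, NULL, &amp;tv);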
if ((readyfds &lt; 0) &amp;&amp; (errno!=EINTR)) {
  printf(&quot;select error&quot;);
}</code></pre><p>We check if the master socket is part of the <code>readfds</code> set using <code>FD_ISSET</code>. If so, it means a new client is attempting to connect. We accept the connection using the <code>accept</code> function and store the new socket descriptor in the <code>connfds</code> array. We also print the client&apos;s IP address using <code>inet_ntoa</code> and increment the <code>activeconnections</code> counter.</p><pre><code class="language-c">if(FD_ISSET(mastersockfd, &amp;readfds)) {
  if((connfds[activeconnections] = accept(mastersockfd, (struct sockaddr *) &amp;clientIPs[activeconnections], (socklen_t *) &amp;addrlen)) &lt; 0) {
    perror(&quot;accept error...&quot;);
    exit(1);
  }

  fprintf(stdout, &quot;New connection from %s\n&quot;,  inet_ntoa(clientIPs[activeconnections].sin_addr));
  activeconnections++;
}</code></pre><p>For each active client socket, we check if it is part of the <code>readfds</code> set using <code>FD_ISSET</code>. If the socket is ready for reading, we clear the input and output buffers using <code>memset</code>. We read data from the client using the <code>read</code> function. If the return value is 0, it means the connection was closed normally, and if it is -1, an error occurred. We handle these cases by printing an error message, marking the connection as closed, and closing the socket. We then continue to the next iteration of the loop.</p><p>If the read operation is successful, we retrieve the client&apos;s IP address from <code>clientIPs</code> using <code>inet_ntoa</code> and store it in the output buffer. We also print the client&apos;s IP address and the received message to the console. We concatenate the client&apos;s IP address and the message into the output buffer. Then, we iterate through all active connections and write the message to each client except the sender, using the <code>write</code> function.</p><pre><code class="language-c">for(int i=0;i &lt; activeconnections; i++) {
  // check if connection is active and it is ready to read
  if(connfds[i] != 0 &amp;&amp; FD_ISSET(connfds[i], &amp;readfds)) {
    // clear buffer
    memset(inBuffer[i], 0, 1024);
    memset(outBuffer[i], 0, 1024);

    // read returns 0 if connection closed normally
    // and -1 if error
    if(read(connfds[i], inBuffer[i], 1024) &lt;= 0) {
      fprintf(stderr, &quot;%s (code: %d)\n&quot;, strerror(errno), errno);
      strncpy(outBuffer[i], inet_ntoa(clientIPs[i].sin_addr), INET_ADDRSTRLEN);
      fprintf(stderr, &quot;Host %s disconnected\n&quot;, outBuffer[i]);
      close(connfds[i]);
      connfds[i] = 0;
      continue;
    }

    // get client ip
    strncpy(outBuffer[i], inet_ntoa(clientIPs[i].sin_addr), INET_ADDRSTRLEN);

    fprintf(stdout, &quot;%s: %s&quot;, outBuffer[i], inBuffer[i]);

    strcat(outBuffer[i], &quot; : &quot;);
    strcat(outBuffer[i], inBuffer[i]);

    for(int j=0;j&lt;activeconnections;j++) {
       if(connfds[j] != 0 &amp;&amp; i != j) {
         write(connfds[j], outBuffer[i], strlen(outBuffer[i]));
       }
    }
  }
}</code></pre><h2 id="setting-up-client">Setting up client</h2><p>Firstly, we define a function called <code>readline</code> that reads input from the user until a specified terminator character (<code>eoc</code>) is encountered. This function will be used to read a line from the user and send it to the server.</p><pre><code class="language-c">int readline(char *buffer, int maxchars, char eoc) {
  int n = 0;
  while(n &lt; maxchars) {
    int c = getc(stdin);  // getc() returns an int so EOF can be detected
    if(c == EOF)
      break;
    buffer[n] = (char) c;
    if(buffer[n] == eoc)
      break;
    n++;
  }
  return n;
}</code></pre><p>Moving on to the main function, we declare some variables and initialize them. <code>sockfd</code> represents the socket file descriptor, <code>serv_addr</code> is a structure that holds the server&apos;s address information, and <code>sendline</code> and <code>recvline</code> are character arrays to store the messages to be sent and received, respectively.</p><pre><code class="language-c">int sockfd;
struct sockaddr_in serv_addr;
char sendline[1024], recvline[1024];</code></pre><p>We set up the server address by assigning the address family (<code>AF_INET</code>) and the port number (<code>8081</code>) to the <code>serv_addr</code> structure. Additionally, we convert the server IP address from a string format to binary using the <code>inet_pton</code> function. If the conversion fails, an error message is displayed, and the program exits.</p><pre><code class="language-c">serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(8081);

if(inet_pton(AF_INET, &quot;127.0.0.1&quot;, &amp;serv_addr.sin_addr)&lt;=0) { 
  perror(&quot;address conversion error...&quot;);
  exit(-1);
} </code></pre><p>To establish a connection with the server, we use the <code>connect</code> function. It takes the socket file descriptor (<code>sockfd</code>), the server address structure, and its size as arguments. (As on the server side, <code>sockfd</code> must first be created with <code>socket(AF_INET, SOCK_STREAM, 0)</code>; the excerpt omits that call, but it appears in the full code linked above.) If the connection fails, an error message is printed, and the program exits.</p><pre><code class="language-c">if(connect(sockfd, (struct sockaddr *) &amp;serv_addr, sizeof serv_addr) &lt; 0) {
  perror(&quot;connect error...&quot;);
  exit(1);
}</code></pre><p>We set up a loop that continuously listens for input from the user and messages from the server. The <code>fd_set</code> data structure is used to monitor the file descriptors for activity. In our case, we monitor the user input (<code>0</code>) and the socket (<code>sockfd</code>) for readability.</p><p>Within the loop, we call <code>select</code> to check for any active file descriptors. If <code>select</code> returns a value less than <code>0</code> and the error is not due to interruption (<code>EINTR</code>), an error message is printed.</p><p>If the user input (<code>stdin</code>) is ready for reading, we call the <code>readline</code> function to read the input from the user and store it in the <code>sendline</code> buffer. Then, we use the <code>write</code> function to send the input to the server.</p><p>If the socket is ready for reading, we use the <code>read</code> function to receive data from the server and store it in the <code>recvline</code> buffer. Finally, we print the received data to the standard output using <code>fprintf</code>.</p><pre><code class="language-c">fd_set waitfds;
int readyfds;
while(1) {
  FD_ZERO(&amp;waitfds);

  // watch both the server socket and stdin (fd 0)
  FD_SET(sockfd, &amp;waitfds);
  FD_SET(0, &amp;waitfds);

  memset(recvline, 0, 1024);
  memset(sendline, 0, 1024);

  // sockfd will always be the largest descriptor (stdin is fd 0)
  readyfds = select(sockfd + 1, &amp;waitfds, NULL, NULL, NULL);
  if ((readyfds &lt; 0) &amp;&amp; (errno!=EINTR)) {
    printf(&quot;select error&quot;);
  }

  // if stdin ready, read it and send
  if(FD_ISSET(0, &amp;waitfds)) {
    readline(sendline, 1024, &apos;\n&apos;);
    write(sockfd, sendline, strlen(sendline));
  }
    
  // if socket ready, read it and print
  if(FD_ISSET(sockfd, &amp;waitfds)) {
    read(sockfd, recvline, 1024);
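    // note (not in the original): read() returns 0 once the server closes
    // the connection; a fuller client would check the return value and exit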
    fprintf(stdout, &quot;%s&quot;, recvline);
  }
}</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2023/06/image-2.png" class="kg-image" alt="Networking Project in C - Chat Room" loading="lazy" width="500" height="500"></figure>]]></content:encoded></item><item><title><![CDATA[Stock Market Jargon Explained - Part 1]]></title><description><![CDATA[<p>When I embarked on my journey of learning and investing in the stock market, I encountered a multitude of terms and abbreviations that often left me feeling confused. One such set of terms includes the PE ratio, PB ratio, and more. As I continue to learn, I am writing this</p>]]></description><link>https://avabodha.in/stock-market-jargon-explained-part-1/</link><guid isPermaLink="false">649aad8b85e1c5050d8a38b7</guid><category><![CDATA[Share-Market]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Tue, 27 Jun 2023 10:55:03 GMT</pubDate><media:content url="https://avabodha.in/content/images/2023/06/image_1360542647.png" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2023/06/image_1360542647.png" alt="Stock Market Jargon Explained - Part 1"><p>When I embarked on my journey of learning and investing in the stock market, I encountered a multitude of terms and abbreviations that often left me feeling confused. One such set of terms includes the PE ratio, PB ratio, and more. As I continue to learn, I am writing this blog not only for my future reference but also to assist individuals who may be struggling with these concepts.</p><h3 id="total-revenue">Total Revenue</h3><p>Total revenue refers to the overall amount of income or earnings generated by a company during a specific period of time. 
An increasing total revenue generally indicates that the company is expanding or experiencing growth.</p><h3 id="ebitda-earnings-before-interest-taxes-depreciation-and-amortization">EBITDA (Earnings Before Interest, Taxes, Depreciation, and Amortization)</h3><p>EBITDA represents the total revenue after deducting various expenses, such as the cost of raw materials, power/fuel costs, employee salaries, and selling and administrative expenses. EBITDA is considered a valuable measure as it provides a clearer view of a company&apos;s operational performance by eliminating the effects of financing decisions, accounting methods, and tax regulations.</p><h3 id="pbitebit-profitearnings-before-interest-taxes">PBIT/EBIT (Profit/Earnings Before Interest &amp; Taxes)</h3><p>This metric builds on EBITDA by also accounting for depreciation and amortization. In simple words, EBIT is EBITDA minus depreciation and amortization.</p><h3 id="pbt-profit-before-taxes">PBT (Profit Before Taxes)</h3><p>PBT is an important measure as it helps assess the profitability of a company&apos;s core operations before accounting for tax obligations. It provides insights into the financial performance and operating efficiency of the company.</p><h3 id="net-income">Net Income</h3><p>Pure profit. Net income, also known as net profit or net earnings, is a financial metric that represents the total profit generated by a company after deducting all expenses, including taxes, interest, and non-operating items, from its total revenue or sales. It provides a comprehensive picture of the company&apos;s financial performance.</p><h3 id="eps-earnings-per-share">EPS (Earnings per share)</h3><p>EPS is calculated by dividing the company&apos;s net income by the total number of outstanding shares. 
It indicates the earnings generated per share.</p><h3 id="dps-dividend-per-share">DPS (Dividend Per Share)</h3><p>It is a financial metric that represents the amount of cash a company distributes to its shareholders for each share they own as dividends. This metric can be misleading: suppose Company 1 has a current share price of 1000 and pays a DPS of 10, and Company 2, with a share price of 100, also pays a DPS of 10. Both companies pay the same DPS, but as an investor, I may prefer Company 2. <strong>That&apos;s why I prefer dividend yield.</strong></p><h3 id="payout-ratio-total-dividendnet-income">Payout Ratio (Total Dividend/Net Income)</h3><p>The payout ratio is calculated by dividing the total dividends paid by the company by its earnings. The payout ratio is expressed as a percentage. For example, a payout ratio of 50% means that 50% of the company&apos;s net income is being paid out as dividends, while the remaining 50% is retained by the company for reinvestment or other purposes.</p><h3 id="dividend-yield">Dividend Yield</h3><p>The dividend yield is a financial ratio that measures the return on investment (ROI) from dividends received by shareholders relative to the market price of a company&apos;s stock. It indicates the percentage of return an investor can expect to receive in the form of dividends based on the current stock price.</p><p>In the same example from DPS, Company 1 has a dividend yield of 1% while Company 2 has a dividend yield of 10%.</p><h3 id="pe-ratio-price-to-earnings-ratio">PE Ratio (Price to Earnings Ratio)</h3><p>The PE ratio is calculated by dividing the market price per share of a company&apos;s stock by its earnings per share (EPS).</p><p>The P/E ratio provides insights into how much investors are willing to pay for each rupee of earnings generated by a company. 
It is often used as a valuation tool to compare the price of a stock to its earnings potential.</p><h3 id="pb-ratio-price-to-book-ratio">PB Ratio (Price to Book Ratio)</h3><p>It is used to evaluate the market value of a company relative to its book value. It compares the current market price per share of a company&apos;s stock to its book value per share.</p><p>It&apos;s important to note that the interpretation of the PB ratio can vary across industries. Some industries, such as technology or high-growth sectors, tend to have higher PB ratios due to their intangible assets and growth prospects. Other industries, such as utilities or mature industries, may have lower PB ratios due to their tangible asset base and stable cash flows.</p>]]></content:encoded></item><item><title><![CDATA[Tried & Tested Way to Find Multibagger Stock]]></title><description><![CDATA[<p>The word multi-bagger is common in the stock market. Knowing its meaning, everyone goes in search of multi-bagger shares. So first let us explain the meaning o multi-bagger,</p><h3 id="what-is-multi-bagger-stock">What is multi-bagger stock?</h3><p>A multi-bagger stock is a stock whose price has multiplied many times its purchase price. If the price</p>]]></description><link>https://avabodha.in/tried-tested-way-to-find-multibagger-stock/</link><guid isPermaLink="false">623ff0d78094630566973618</guid><category><![CDATA[Share-Market]]></category><category><![CDATA[Stock]]></category><dc:creator><![CDATA[Nightking]]></dc:creator><pubDate>Mon, 22 May 2023 04:10:03 GMT</pubDate><media:content url="https://avabodha.in/content/images/2023/05/image.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2023/05/image.jpg" alt="Tried &amp; Tested Way to Find Multibagger Stock"><p>The word multi-bagger is common in the stock market. Knowing its meaning, everyone goes in search of multi-bagger shares. 
So first, let us explain the meaning of multi-bagger.</p><h3 id="what-is-multi-bagger-stock">What is multi-bagger stock?</h3><p>A multi-bagger stock is a stock whose price has multiplied many times its purchase price. If the price doubles, the stock is a two-bagger; if the price increases twenty times, it is a twenty-bagger; and if it increases 50 times, it is a fifty-bagger. </p><p>Finding a stock that will multiply in the future is like finding a multi-bagger! Finding such a future multi-bagger means studying stocks that have multiplied many times in the past and buying shares by finding similarities among all the previous multi-baggers!</p><h3 id="how-to-find-multi-bagger-stock">How to find multi-bagger stock?</h3><p>Many studious investors have discovered a number of methods this way. One of them is the world-famous CANSLIM method invented by William O&apos;Neil. Let us now understand this method -</p><p>William O&apos;Neil studied multi-bagger stocks in the United States from 1953 to 1985 and discovered seven similarities. The word CANSLIM is an acronym made from the initials of these seven things to remember.</p><h3 id="canslim-in-detail">CANSLIM in detail</h3><p><strong>C = Current quarterly earnings per share</strong>. The company&apos;s quarterly profit should have increased by 20%. Each company publishes its accounts every three months. This year&apos;s profit should have increased by at least 20% over the same quarter last year. E.g. the March 2017 profit should be at least 20% higher than the March 2016 profit.</p><p><strong>A = Annual earnings per share.</strong> The annual profit of the company should have been increasing strongly every year for the last five years.</p><p><strong>N = New things (product, management, price). 
</strong>The company may have launched a new product, or it may have new management, or its shares may have recorded a new all-time high price in the stock market.</p><p><strong>S = Shares outstanding.</strong> The fewer shares available in the stock market for buying and selling, the better; if demand increases while the supply of shares is small, the price rises.</p><p><strong>L = Leaders.</strong> Choose the leading company in that area/industry.</p><p><strong>I = Institutional Ownership.</strong> Three to ten large institutions, such as mutual funds, insurance companies, pension funds, and foreign institutional investors, should have invested in the company&apos;s shares. Such institutions study in depth before investing.</p><p><strong>M = Market Direction. </strong>Buy when the stock market is booming. </p><p>If a stock fulfils all the above points, then we can say it is a possible multi-bagger stock.</p>]]></content:encoded></item><item><title><![CDATA[A Beginner's Guide to Hook Mechanism in Go]]></title><description><![CDATA[<h2 id="what-are-hooks">What are Hooks?</h2><p>Hooks are a way to extend the functionality of an application by allowing developers to &quot;hook&quot; into certain events or actions that occur during the application&apos;s lifecycle. 
Hooks can be used to modify the behavior of an application, add new features, or perform</p>]]></description><link>https://avabodha.in/a-beginners-guide-to-hook-mechanism-in-go/</link><guid isPermaLink="false">642ba60085e1c5050d8a3804</guid><category><![CDATA[Go programming language]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Tue, 04 Apr 2023 04:47:36 GMT</pubDate><media:content url="https://avabodha.in/content/images/2023/04/hook.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="what-are-hooks">What are Hooks?</h2><img src="https://avabodha.in/content/images/2023/04/hook.jpg" alt="A Beginner&apos;s Guide to Hook Mechanism in Go"><p>Hooks are a way to extend the functionality of an application by allowing developers to &quot;hook&quot; into certain events or actions that occur during the application&apos;s lifecycle. Hooks can be used to modify the behavior of an application, add new features, or perform custom actions.</p><p>For example, imagine you have an application that performs some processing when a user logs in. You could use a hook to add some additional processing when the user logs in, such as sending a welcome email or logging the login event.</p><p>Hooks are widely used in various systems, including web applications, operating systems, and programming languages. In this blog post, we will be exploring the hook mechanism and how to implement it in the Go programming language.</p><h2 id="hook-mechanism">Hook Mechanism</h2><p>The hook mechanism consists of three primary components: <strong>the hook interface, the hook register function, and the hook implementation.</strong> The hook interface defines the operations or blueprint for the hooks. All hooks should implement these operations. The hook register function is used to register custom hooks with the program. 
The hook implementation contains the custom code to be executed when the hook fires.</p><h2 id="implementing-hooks-in-go">Implementing Hooks in Go</h2><p>To implement hooks in Go, we&apos;ll first define an interface that all hooks must implement. This interface declares the <code>Init()</code>, <code>Perform()</code>, and <code>Destroy()</code> methods. Here is an example of what this interface might look like:</p><pre><code class="language-go">type Hook interface {
  Init()
  Perform()
  Destroy()
}
</code></pre><p>Next, we&apos;ll define a registration function that hooks can use to register themselves with the application. This function will take an instance of the hook interface and add it to a list of registered hooks. Here is an example of what this function might look like:</p><pre><code class="language-go">var hooks []Hook

func Register(h Hook) {
  hooks = append(hooks, h)
}

// GetHooks returns the hooks registered so far; the main function
// shown later uses it to retrieve them.
func GetHooks() []Hook {
  return hooks
}</code></pre><p>Now that we have our interface and registration function defined, we can start implementing our hooks. Let&apos;s take a look at an example of a simple hook:</p><pre><code class="language-go">//go:build hook_1
// +build hook_1

package hooks

import (
  &quot;fmt&quot;
)

type MyHook1 struct {
  name string
}

func (m *MyHook1) Init() {
  fmt.Println(&quot;Initializing Hook 1&quot;)
}

func (m *MyHook1) Perform() {
  fmt.Println(&quot;Performing Hook 1&quot;)
}

func (m *MyHook1) Destroy() {
  fmt.Println(&quot;Destroying Hook 1&quot;)
}

func init() {
  fmt.Println(&quot;Registering hook 1&quot;)
  Register(&amp;MyHook1{name: &quot;MyHook1&quot;})
}</code></pre><p>Finally, we modify the main function of our program to get all the registered hooks and call their <code>Init()</code>, <code>Perform()</code>, and <code>Destroy()</code> functions.</p><pre><code class="language-go">func main() {
  hooks := hooks.GetHooks()

  fmt.Println(&quot;Got hooks&quot;, len(hooks))

  for _, v := range hooks {
    v.Init()
  }

  for _, v := range hooks {
    v.Perform()
  }

  for _, v := range hooks {
    v.Destroy()
  }
}</code></pre><p>The program first retrieves all the registered hooks and calls their <code>Init()</code> function to initialize the hook-specific data. Then, the program calls the <code>Perform()</code> function of each hook, which performs the hook&apos;s functionality. Finally, the program calls the <code>Destroy()</code> function of each hook to clean up the hook.</p><p>Regarding the build comments ( <code>//go:build hook_1</code> or <code>// +build hook_1</code> ), you can use them to control which hooks are included in the built binary. For example, if you want to include only the first hook in the binary, you can use the following command to build the application: </p><pre><code class="language-sh">go build -tags hook_1</code></pre><p>This command will include only the files that match the build tag <code>hook_1</code>, which in this case will be our above hook. Note that a file guarded by a build tag is compiled only when that tag is passed, so a plain <code>go build</code> would include none of the tagged hooks. To include several hooks at once, list their tags together (assuming a second hook guarded by the tag <code>hook_2</code>):</p><pre><code class="language-sh">go build -tags &quot;hook_1 hook_2&quot;</code></pre><p>In conclusion, the hook mechanism in Go allows developers to add hooks to their code, which can be executed at specific times during program execution. This mechanism provides developers with flexibility and the ability to add custom snippets. The code snippets and explanations in this blog post should help you get started with implementing hooks in your own Go programs.</p><p>If you want to see the complete code with all the hooks, you can check out the GitHub repository here: <a href="https://github.com/lets-learn-it/go-learning/tree/master/21-packages-and-modules/04-hook-mechanism">https://github.com/lets-learn-it/go-learning/tree/master/21-packages-and-modules/04-hook-mechanism</a>. 
The repository includes a main.go file, a hooks folder containing various hooks, and a register.go file, which is a helper for registering the hooks.</p>]]></content:encoded></item><item><title><![CDATA[Introduction to Apache Kafka: Theory]]></title><description><![CDATA[Kafka is a pub-sub distributed system consisting of servers and clients that communicate via a high-performance TCP network protocol.]]></description><link>https://avabodha.in/introduction-to-apache-kafka-theory/</link><guid isPermaLink="false">63301048ae2a73052e46970c</guid><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Sun, 25 Sep 2022 10:06:51 GMT</pubDate><media:content url="https://avabodha.in/content/images/2022/09/kafka.png" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2022/09/kafka.png" alt="Introduction to Apache Kafka: Theory"><p>If you are ready to read a lot of web pages, then please go to the <a href="https://kafka.apache.org/documentation/">Apache Kafka documentation</a>, because that will make the most sense. If you are comfortable watching video lectures, then watching <a href="https://www.linkedin.com/learning/learn-apache-kafka-for-beginners/">learn-apache-Kafka-for-beginners</a> instead of reading this blog will be cool. But if you want to know quickly what Apache Kafka is, then stay with me.</p><p>Kafka is a pub-sub distributed system consisting of servers and clients that communicate via a high-performance TCP network protocol. You can write a lot of messages/events to Kafka and read them at your convenience.</p><h2 id="terminologies">Terminologies</h2><h3 id="producers-consumers">Producers &amp; Consumers</h3><p>Producers/publishers are clients that write data/events to Kafka, while consumers/subscribers are clients that read data/events from Kafka. Both consumers and producers may or may not have knowledge of each other&apos;s existence. 
Both are fully decoupled.</p><p>When producers write data to a Kafka topic, the <strong>data is assigned to partitions randomly unless a key is provided.</strong> All events with the same key will be assigned to the same partition.</p><h3 id="brokers">Brokers</h3><p>Brokers are machines on which Apache Kafka runs, providing its services to both producers and consumers. The system can have multiple brokers (which is recommended for high availability and fault tolerance), and each broker is identified by an <strong>id</strong>. Each broker can hold one or more topic partitions, and those <strong>partitions will be replicated across multiple brokers</strong>.</p><p>If a client connects to any broker (called a bootstrap broker), the client is connected to the entire cluster.</p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2022/09/brokers.png" class="kg-image" alt="Introduction to Apache Kafka: Theory" loading="lazy" width="641" height="391" srcset="https://avabodha.in/content/images/size/w600/2022/09/brokers.png 600w, https://avabodha.in/content/images/2022/09/brokers.png 641w"></figure><h3 id="topics-partitions-offsets">Topics, Partitions &amp; Offsets</h3><p>Topics are like tables in a database or virtual hosts in Rabbit MQ. Events are organized and durably stored in topics. Topics can have multiple producers writing events while, at the same time, multiple consumers are consuming those events.</p><p>Topics are split into partitions (0, 1, 2, 3, ...), and each message within a partition gets an incremental id called an offset. These offsets have meaning only within that specific partition.</p><p>Each topic has a replication factor that states <strong>how many times every partition in the topic should be replicated across different brokers</strong>. It should be &gt; <code>1</code>. 
If one broker goes down, another broker serves the data.</p><p>Since each partition is replicated across brokers, <strong>who will be the owner of that partition?</strong> This is where the partition leader comes in: at any time, only ONE broker can be the leader for a given partition. Only that leader can receive and serve data for that partition; the other brokers simply synchronize the data. </p><p>Suppose, in the image below, the broker with id <code>101</code> goes down. Partition <code>0</code> of topic A is also present on the broker with id <code>102</code>, so broker <code>102</code> becomes the new leader for partition <code>0</code>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://avabodha.in/content/images/2022/09/replication.png" class="kg-image" alt="Introduction to Apache Kafka: Theory" loading="lazy" width="601" height="351" srcset="https://avabodha.in/content/images/size/w600/2022/09/replication.png 600w, https://avabodha.in/content/images/2022/09/replication.png 601w"><figcaption>Replication of partitions across brokers in Apache Kafka</figcaption></figure><p>Once data is written to a partition, <strong>it can&apos;t be changed</strong> (immutability).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://avabodha.in/content/images/2022/09/image.png" class="kg-image" alt="Introduction to Apache Kafka: Theory" loading="lazy" width="1456" height="754" srcset="https://avabodha.in/content/images/size/w600/2022/09/image.png 600w, https://avabodha.in/content/images/size/w1000/2022/09/image.png 1000w, https://avabodha.in/content/images/2022/09/image.png 1456w" sizes="(min-width: 720px) 720px"><figcaption>https://kafka.apache.org/documentation/</figcaption></figure><p>As you can see in the above image, there is a topic that has 4 partitions.</p><h2 id="producers-and-message-keys">Producers and Message Keys</h2><p>Producers write data to topics (which are made of partitions). 
Producers automatically know which broker &amp; partition to write to, and in case of broker failure, they will automatically recover.</p><p>Producers can choose to receive acknowledgment of data writes. The default is <code>acks=1</code>.</p><!--kg-card-begin: markdown--><ul>
<li><strong>acks=0</strong>: no ack (possible data loss)</li>
<li><strong>acks=1</strong>: ack from leader only (limited loss)</li>
<li><strong>acks=all</strong> (or <code>acks=-1</code>): ack from the leader and all in-sync replicas (no data loss)</li>
</ul>
<!--kg-card-end: markdown--><h3 id="message-keys">Message Keys</h3><p>Producers can send keys with messages (string, number, etc.). <strong>When the key is null and the default partitioner is used, the record will be sent to one of the available partitions of the topic at random</strong>. A round-robin algorithm will be used to balance the messages among the partitions.</p><p><strong>If a key is sent, all messages with the same key will go to the same partition.</strong></p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://avabodha.in/content/images/2022/09/message-keys-2.png" class="kg-image" alt="Introduction to Apache Kafka: Theory" loading="lazy" width="963" height="403" srcset="https://avabodha.in/content/images/size/w600/2022/09/message-keys-2.png 600w, https://avabodha.in/content/images/2022/09/message-keys-2.png 963w"></figure><h2 id="consumers-and-consumer-groups">Consumers and Consumer Groups</h2><p>Consumers read data from a topic (identified by name). Consumers know which broker (the leader) to read from and, if a broker fails, how to recover. <strong>Data is read in order within each partition. </strong>The same message can be read multiple times if the consumer wants to.</p><h3 id="consumer-groups">Consumer Groups</h3><p>Consumers can read data in consumer groups. 
<strong>Each consumer within the group reads from exclusive partitions.</strong> If the group has more consumers than partitions, some consumers will be inactive; if the group has fewer consumers than partitions, some consumers will read from multiple partitions.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://avabodha.in/content/images/2022/09/consumer-groups.png" class="kg-image" alt="Introduction to Apache Kafka: Theory" loading="lazy" width="1111" height="413" srcset="https://avabodha.in/content/images/size/w600/2022/09/consumer-groups.png 600w, https://avabodha.in/content/images/size/w1000/2022/09/consumer-groups.png 1000w, https://avabodha.in/content/images/2022/09/consumer-groups.png 1111w"></figure><h3 id="consumer-offset">Consumer Offset</h3><p>Kafka stores the offsets at which a consumer group has been reading. They tell a consumer where it left off in a partition and which message to read next. Offsets are committed to an internal Kafka topic named <code>__consumer_offsets</code>.</p><p>When a consumer in the group has processed data received from Kafka, it should commit the offsets. If a consumer dies, it will be able to read back from where it left off (thanks to the committed consumer offsets).</p><h2 id="apache-kafka-guarantees">Apache Kafka Guarantees</h2><!--kg-card-begin: markdown--><ul>
<li>Messages are appended to a topic partition in the order they are sent</li>
<li>Consumers read messages in order stored in the topic partition.</li>
<li>With a replication factor of <strong>N</strong>, producer &amp; consumer can tolerate up to <strong>N-1</strong> brokers being down.</li>
<li>As long as the number of partitions remains constant for a topic (no new partitions), the message with the same key will always go to the same partition.</li>
</ul>
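The last guarantee (same key, same partition) can be sketched in a few lines of Python. This is a simplified stand-in: Kafka's real default partitioner hashes keys with murmur2, and newer producers use a "sticky" partitioner for null keys, but the routing idea is the same.

```python
import hashlib
import itertools

# round-robin counter used for messages without a key (illustrative only)
_round_robin = itertools.count()

def choose_partition(key, num_partitions):
    """Simplified sketch of default-partitioner routing (not Kafka's murmur2)."""
    if key is None:
        # null key: spread messages across partitions via round-robin
        return next(_round_robin) % num_partitions
    # keyed message: a stable hash of the key picks the partition, so
    # every message with the same key lands on the same partition
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# same key always routes to the same partition
assert choose_partition(b"truck_42", 6) == choose_partition(b"truck_42", 6)
```

Note this also shows why changing the number of partitions breaks the guarantee: the modulo changes, so the same key may map to a different partition.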
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Scale Kubernetes pods based on Azure Service Bus Queue using Keda]]></title><description><![CDATA[KEDA can be used to autoscale pods by using different predefined scalers. In this post, we will learn with Azure service bus queue scaler.]]></description><link>https://avabodha.in/scale-kubernetes-pods-based-on-azure-service-bus-queue-using-keda/</link><guid isPermaLink="false">622b072e8094630566973502</guid><category><![CDATA[azure]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Fri, 11 Mar 2022 13:49:36 GMT</pubDate><media:content url="https://avabodha.in/content/images/2022/03/main.png" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2022/03/main.png" alt="Scale Kubernetes pods based on Azure Service Bus Queue using Keda"><p>In this blog post, we will use KEDA (Kubernetes Event-driven Autoscaling) for autoscaling pod count based on Azure Service Bus Queue length. I will be using the local Kubernetes cluster but it should work with Azure Kubernetes Service (AKS) also. All code used in this blog post can be found <a href="https://github.com/lets-learn-it/keda-examples/tree/master/azure-service-bus-queue">https://github.com/lets-learn-it/keda-examples/tree/master/azure-service-bus-queue</a>.</p><!--kg-card-begin: markdown--><p>Plan of Action as below,</p>
<ol>
<li>Deploy KEDA in Kubernetes cluster</li>
<li>Create Azure Service Bus Queue using Terraform</li>
<li>Writing Kubernetes configuration files</li>
<li>Testing</li>
</ol>
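Step 1 of the plan, installing KEDA with Helm, looks roughly like this (commands follow the KEDA deploy docs; the release name and namespace `keda` are my choices):

```shell
# add the KEDA Helm chart repository and refresh the index
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
# install KEDA into its own namespace
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
```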
<!--kg-card-end: markdown--><h2 id="deploy-keda">Deploy KEDA</h2><p>KEDA can be installed in the Kubernetes cluster using Helm. More info <a href="https://keda.sh/docs/2.6/deploy/">https://keda.sh/docs/2.6/deploy/</a>. &#xA0;I deployed KEDA 2.6 in Kubernetes 1.22.x.</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://avabodha.in/content/images/2022/03/install_keda.PNG" class="kg-image" alt="Scale Kubernetes pods based on Azure Service Bus Queue using Keda" loading="lazy" width="1549" height="684" srcset="https://avabodha.in/content/images/size/w600/2022/03/install_keda.PNG 600w, https://avabodha.in/content/images/size/w1000/2022/03/install_keda.PNG 1000w, https://avabodha.in/content/images/2022/03/install_keda.PNG 1549w"></figure><h2 id="azure-service-bus-queue">Azure Service Bus Queue</h2><p>We need to create a resource group in order to create a service bus namespace &amp; queue inside that namespace. Below is terraform code to achieve that.</p><pre><code class="language-hcl">resource &quot;azurerm_resource_group&quot; &quot;kedaq-rg&quot; {
  name     = &quot;keda-demo-rg&quot;
  location = &quot;West Europe&quot;
}

resource &quot;azurerm_servicebus_namespace&quot; &quot;keda-namespace&quot; {
  name                = var.servicebus-namespace-name
  location            = azurerm_resource_group.kedaq-rg.location
  resource_group_name = azurerm_resource_group.kedaq-rg.name
  sku                 = &quot;Standard&quot;

}

resource &quot;azurerm_servicebus_queue&quot; &quot;keda-demoq&quot; {
  name                = var.servicebus-queue-name
  namespace_name      = azurerm_servicebus_namespace.keda-namespace.name
  resource_group_name = azurerm_resource_group.kedaq-rg.name

  enable_partitioning = true
}</code></pre><p>For the service bus namespace, we can use the default shared access policy, but for the queue, we need to create one access policy. Make sure to create a <code>Manage</code> access policy (as per the KEDA docs). </p><pre><code class="language-hcl">resource &quot;azurerm_servicebus_queue_authorization_rule&quot; &quot;queuerule&quot; {
  name                = &quot;queuerule&quot;
  namespace_name      = azurerm_servicebus_namespace.keda-namespace.name
  queue_name          = azurerm_servicebus_queue.keda-demoq.name
  resource_group_name = azurerm_resource_group.kedaq-rg.name

  # As per KEDA docs,
  # Service Bus Shared Access Policy needs to be of type Manage.
  # Manage access is required for KEDA to be able to get metrics from Service Bus.
  manage = true
  listen = true
  send   = true
}</code></pre><p>Make sure to declare the variables used in the above configuration. We also need to output some values, i.e. the connection strings.</p><pre><code class="language-hcl">output &quot;namespace_primary_connection_string&quot; {
    value     = azurerm_servicebus_namespace.keda-namespace.default_primary_connection_string
    sensitive = true
}

output &quot;queue_primary_connection_string&quot; {
    value     = azurerm_servicebus_queue_authorization_rule.queuerule.primary_connection_string
    sensitive = true
}
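
# Because both outputs are marked sensitive, read them after `terraform apply`
# with e.g.:  terraform output -raw queue_primary_connection_string
# (the queue connection string is the one needed for the queue-level secret later)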
</code></pre><h4 id="create-resources">Create resources</h4><p>Run <code>terraform plan</code> &amp; <code>terraform apply</code> to create the resources in Azure. After running Terraform, I got one Azure Service Bus namespace with one queue inside it.</p><h2 id="kubernetes-configuration-files">Kubernetes configuration files</h2><p>I will deploy Nginx as a deployment that does nothing itself. We will add and remove messages from the queue manually to test scaling.</p><pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx
        resources:
          limits:
            memory: &quot;128Mi&quot;
            cpu: &quot;100m&quot;
        ports:
        - containerPort: 8080
        env:
          # This is not required when using triggerauthentication object
          - name: keda-secret
            valueFrom: 
              secretKeyRef:
                name: queue-policy-secret
                key: connection-string</code></pre><p>Everything in the above YAML is normal except the environment variable <code>keda-secret</code>. We can authorize KEDA <code>ScaledObject</code> to access our queue in multiple ways. <strong>One of the ways needs this secret.</strong></p><p>We will see 2 ways to authorize KEDA to access the queue. There is another way of using <strong>identity </strong>but for simplicity, we will not see this way in this post. </p><h4 id="way-01-use-triggerauthentication">Way 01 Use TriggerAuthentication</h4><p>First create secret using default primary connection string of Azure service bus <strong>namespace</strong>, like below </p><pre><code class="language-yaml">apiVersion: v1
kind: Secret
metadata:
  name: namespace-secret
type: Opaque
data:
  # This is azure service bus namespace connection string
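  # (base64 of the plain connection string; generated with something like:
  #    echo -n "$NAMESPACE_CONNECTION_STRING" | base64 -w0)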
  connection: RW5kcG9pbnQ9c2I6Ly9rZWRhLXNlcnZpY2VidXMtbmFtZXNwYWNlLnNlcnZpY2VidXMud2luZG93cy5uZXQvO1NoYXJlZEFjY2Vzc0tleU5hbWU9Um9vdE1hbmFnZVNoYXJlZEFjY2Vzc0tleTtTaGFyZWRBY2Nlc3NLZXk9bGQ4akIyV042SWRlbzJkMWJobXpYR01SZWRIOXg5ZloremFxSGtmVUQrcz0=
</code></pre><p>Now, once we have a secret, we can create <code>TriggerAuthentication</code> for that secret.</p><pre><code class="language-yaml">apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: azure-servicebus-auth
spec:
  secretTargetRef:
    - key: connection
      name: namespace-secret # name of secret
      parameter: connection # key in secret
</code></pre><p>The remaining piece in the puzzle is <code>ScaledObject</code>. Which we can create using below YAML,</p><pre><code class="language-yaml">apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-servicebus-queue-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    kind: Deployment
    name: demo-app
  pollingInterval: 30
  cooldownPeriod: 60
  minReplicaCount: 1
  maxReplicaCount: 4

  triggers:
  - type: azure-servicebus
    metadata:
      queueName: keda-demoq
      messageCount: &quot;5&quot;
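      # KEDA's generated HPA targets roughly ceil(queue length / messageCount)
      # replicas, clamped to [minReplicaCount, maxReplicaCount]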
    authenticationRef:
        # reference to TriggerAuthentication
        name: azure-servicebus-auth</code></pre><p>That&apos;s it. It will create an HPA; if the HPA is not created, check the logs of the KEDA operator. Also make sure to check the status of the <code>ScaledObject</code>.</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://avabodha.in/content/images/2022/03/way1.PNG" class="kg-image" alt="Scale Kubernetes pods based on Azure Service Bus Queue using Keda" loading="lazy" width="1537" height="656" srcset="https://avabodha.in/content/images/size/w600/2022/03/way1.PNG 600w, https://avabodha.in/content/images/size/w1000/2022/03/way1.PNG 1000w, https://avabodha.in/content/images/2022/03/way1.PNG 1537w"></figure><h5 id="testing">Testing</h5><p>I sent 6 messages to the queue using the Service Bus explorer, and KEDA scaled the pods up to 2. To check downscaling, remove some messages from the queue (make sure to wait for the <code>cooldownPeriod</code>).</p><h4 id="way-02-use-pods-environment-variable">Way 02 Use pod&apos;s environment variable</h4><p>We can also use the pod&apos;s environment variable for authorization, though I personally discourage it. In this method, <strong>we need the connection string of the queue as an environment variable of the deployment. </strong>But before starting way 02, remove the resources from way 01.</p><p>The secret changes (it is no longer the connection string of the namespace but of the queue itself; for this reason, we created the <code>authorization rule</code> in Terraform). And we don&apos;t need <code>TriggerAuthentication</code>.</p><pre><code class="language-yaml">apiVersion: v1
kind: Secret
metadata:
  name: queue-policy-secret
type: Opaque
data:
  # This is queue connection string
  connection: RW5kcG9pbnQ9c2I6Ly9rZWRhLXNlcnZpY2VidXMtbmFtZXNwYWNlLnNlcnZpY2VidXMud2luZG93cy5uZXQvO1NoYXJlZEFjY2Vzc0tleU5hbWU9cXVldWVydWxlO1NoYXJlZEFjY2Vzc0tleT1pWTNzSlh5OWorL0wvZVhMYmU5RmFseWVDU1pMWHM4WDFQVTZ2dkhzeW1nPTtFbnRpdHlQYXRoPWtlZGEtZGVtb3E=
</code></pre><p>The only change in <code>ScaledObject</code> is <code>connectionFromEnv</code> parameter &amp; removing <code>TriggerAuthentication</code> reference.</p><pre><code class="language-yaml">apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-servicebus-queue-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    kind: Deployment
    name: demo-app
  pollingInterval: 30
  cooldownPeriod: 60
  minReplicaCount: 1
  maxReplicaCount: 4

  triggers:
  - type: azure-servicebus
    metadata:
      queueName: keda-demoq
      messageCount: &quot;5&quot;
      # ENV var of deployment
      connectionFromEnv: keda-secret</code></pre><figure class="kg-card kg-image-card kg-width-full"><img src="https://avabodha.in/content/images/2022/03/way2.PNG" class="kg-image" alt="Scale Kubernetes pods based on Azure Service Bus Queue using Keda" loading="lazy" width="1526" height="717" srcset="https://avabodha.in/content/images/size/w600/2022/03/way2.PNG 600w, https://avabodha.in/content/images/size/w1000/2022/03/way2.PNG 1000w, https://avabodha.in/content/images/2022/03/way2.PNG 1526w"></figure><p>This time, you can&apos;t see anything in front of Authentication but it should show <code>Ready</code> status &amp; it will work.</p><h3 id="cleanup">Cleanup</h3><p>Make sure to clean all resources, especially those you created in Azure. &#xA0;No one wants to pay the cost for unused resources.</p><h3 id="references">References</h3><!--kg-card-begin: markdown--><p><a href="https://keda.sh/docs/2.6/scalers/azure-service-bus/">[1] https://keda.sh/docs/2.6/scalers/azure-service-bus/</a><br>
<a href="https://github.com/kedacore/keda/blob/main/pkg/scalers/azure_servicebus_scaler.go">[2] https://github.com/kedacore/keda/blob/main/pkg/scalers/azure_servicebus_scaler.go</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Make Branching and Condition Evaluation Faster]]></title><description><![CDATA[Branching is one of the slowest operations & can be made faster using a bitwise operator. We can completely remove branching with the use of bitwise operators.]]></description><link>https://avabodha.in/make-branching-and-condition-evaluation-faster/</link><guid isPermaLink="false">62242f778094630566973478</guid><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Sun, 06 Mar 2022 05:09:19 GMT</pubDate><media:content url="https://avabodha.in/content/images/2022/03/download-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2022/03/download-1.png" alt="Make Branching and Condition Evaluation Faster"><p>Some time back, I read about how to make branching faster; I don&apos;t remember where, maybe on Reddit. In this blog, and in future posts, I will try to share similar tricks whenever I find new ones. You can find all the code in the GitHub repository.</p><p>In this post, we will look at the simplest possible example: counting odd &amp; even numbers. The code below will be used for all the solutions; only the condition part will change.</p><pre><code class="language-cpp">#include&lt;iostream&gt;
using namespace std;

int main() {
    int n = 999999999;

    int even = 0;
    int odd = 0;

    for(int i=0;i&lt;n;i++) {
    // changing part
        if(i % 2) {
            odd ++;
        } else {
            even ++;
        }
    // changing part ends
    }

    cout &lt;&lt; &quot;Even: &quot; &lt;&lt; even &lt;&lt; endl;
    cout &lt;&lt; &quot;Odd: &quot; &lt;&lt; odd &lt;&lt; endl;

    return 0;
}</code></pre><h3 id="using-hyperfine">Using hyperfine</h3><p>I will be using <code>hyperfine</code> to measure the performance of the code. You can check it out at <a href="https://github.com/sharkdp/hyperfine">https://github.com/sharkdp/hyperfine</a>.</p><h2 id="a-simplest-solution">A. Simplest solution</h2><p>Using <code>if</code> &amp; <code>else</code> with <code>%</code> (the remainder operator) is the most common solution. </p><pre><code class="language-cpp">if(i % 2) {
    odd ++;
} else {
    even ++;
}</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2022/03/image.png" class="kg-image" alt="Make Branching and Condition Evaluation Faster" loading="lazy" width="890" height="160" srcset="https://avabodha.in/content/images/size/w600/2022/03/image.png 600w, https://avabodha.in/content/images/2022/03/image.png 890w" sizes="(min-width: 720px) 720px"></figure><p>The simplest solution took an average time of <strong>2.9 seconds.</strong></p><h2 id="b-using-bitwise-instead-of-the-remainder">B. Using bitwise instead of the remainder</h2><p>Let&apos;s try the bitwise operator instead of the remainder operator. Using <code>i &amp; 1</code> instead of <code>i % 2</code> will also work.</p><pre><code class="language-cpp">if(i &amp; 1) {
    odd ++;
} else {
    even ++;
}</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2022/03/image-1.png" class="kg-image" alt="Make Branching and Condition Evaluation Faster" loading="lazy" width="869" height="166" srcset="https://avabodha.in/content/images/size/w600/2022/03/image-1.png 600w, https://avabodha.in/content/images/2022/03/image-1.png 869w" sizes="(min-width: 720px) 720px"></figure><p>It looks like the bitwise condition took less time on average, but its maximum time is larger than in the previous approach.</p><h2 id="c-remove-branching-completely">C. Remove branching completely</h2><p>It is possible to remove branching completely and use bitwise operations only. In the first condition, we check whether the number is odd and, if so, increment the <code>odd</code> counter. We can achieve this with the <code>&amp;&amp;</code> operator, as in <code>(i &amp; 1) &amp;&amp; odd++</code>: <code>odd</code> is incremented only when <code>(i &amp; 1)</code> is <code>true</code>. The <code>else</code> condition can be handled similarly.</p><pre><code class="language-cpp">(i &amp; 1) &amp;&amp; odd++;
(i &amp; 1) || even++;</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2022/03/image-2.png" class="kg-image" alt="Make Branching and Condition Evaluation Faster" loading="lazy" width="897" height="167" srcset="https://avabodha.in/content/images/size/w600/2022/03/image-2.png 600w, https://avabodha.in/content/images/2022/03/image-2.png 897w" sizes="(min-width: 720px) 720px"></figure><h2 id="d-remove-duplicate-i-1">D. Remove duplicate <code>(i &amp; 1)</code></h2><p>We can replace the 2 statements with a single, more complex one-liner. Note the pre-increment: <code>odd++</code> yields the old value, so when <code>odd</code> is <code>0</code> the left side would be falsy and <code>even++</code> would also run, while <code>++odd</code> always yields a non-zero (truthy) value.</p><pre><code class="language-cpp">((i &amp; 1) &amp;&amp; ++odd) || even++;
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Accessing Azure Storage Account from VM using System Assigned Identity & Roles]]></title><description><![CDATA[<p>In this blog post, we will start exploring identities &amp; Role-based access control (RBAC) in Azure for accessing different Azure resources from applications. We will create an infrastructure consisting <strong>Azure Storage Account</strong> which will be having <strong>2 blob storage containers.</strong> We will also create one virtual machine which will be</p>]]></description><link>https://avabodha.in/accessing-storage-account-from-vm-using-system-managed-identity-roles/</link><guid isPermaLink="false">61f23b77af34b50574379ae8</guid><category><![CDATA[azure]]></category><category><![CDATA[kubernetes]]></category><category><![CDATA[cloud]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Thu, 27 Jan 2022 07:59:34 GMT</pubDate><media:content url="https://avabodha.in/content/images/2022/01/systemassignedroles.drawio-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2022/01/systemassignedroles.drawio-1.png" alt="Accessing Azure Storage Account from VM using System Assigned Identity &amp; Roles"><p>In this blog post, we will start exploring identities &amp; Role-based access control (RBAC) in Azure for accessing different Azure resources from applications. We will create an infrastructure consisting of an <strong>Azure Storage Account</strong> with <strong>2 blob storage containers.</strong> We will also create one virtual machine, which will be allowed as a Reader for that storage account and also as a Contributor for one of the blob storage containers.</p><h2 id="what-is-system-assigned-identity">What is System Assigned Identity?</h2><p>Azure can assign an identity in Azure Active Directory to the Azure resources we create. 
This identity is created when the resource is created &amp; destroyed when the resource is destroyed. Only that Azure resource can use the identity to request tokens from Azure AD.</p><p>You can&apos;t share these identities with other Azure resources; each one is meant only for its own resource.</p><p>All code used in this post is available at <a href="https://github.com/lets-learn-it/terraform-learning/tree/azure/07-system-assigned-identities">https://github.com/lets-learn-it/terraform-learning/tree/azure/07-system-assigned-identities</a></p><!--kg-card-begin: markdown--><p>Plan of Action</p>
<ol>
<li>Creating resource group, vnet, &amp; 1 public subnet.</li>
<li>A virtual machine in public subnet with SSH access allowed &amp; also system assigned identity.</li>
<li>Storage account with 2 blob storage containers.</li>
<li>Role assignments</li>
<li>Checking access from the virtual machine</li>
</ol>
<!--kg-card-end: markdown--><p>We will create 2 modules for the Virtual machine &amp; Storage account. Below is my folder structure</p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2022/01/image-1.png" class="kg-image" alt="Accessing Azure Storage Account from VM using System Assigned Identity &amp; Roles" loading="lazy" width="390" height="402"></figure><h2 id="creating-rg-vnet-subnet">Creating RG, Vnet &amp; subnet</h2><h3 id="resource-group">Resource Group</h3><p>All resources will be placed in this single resource group.</p><pre><code class="language-hcl">resource &quot;azurerm_resource_group&quot; &quot;example&quot; {
  name     = var.resource_group_name
  location = &quot;East US&quot;
}</code></pre><h3 id="vnet-public-subnet">Vnet &amp; public subnet</h3><p>We need one public subnet for virtual machines &amp; make sure to specify a service endpoint for storage.</p><pre><code class="language-hcl">resource &quot;azurerm_virtual_network&quot; &quot;example&quot; {
  name                = &quot;example-network&quot;
  address_space       = [&quot;10.0.0.0/16&quot;]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource &quot;azurerm_subnet&quot; &quot;public_subnet&quot; {
  name                 = &quot;public_subnet&quot;
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = [&quot;10.0.1.0/24&quot;]
  service_endpoints    = [&quot;Microsoft.Storage&quot;]
}</code></pre><h2 id="creating-virtual-machine-module">Creating Virtual Machine (Module)</h2><h3 id="creating-public-ip">Creating Public IP</h3><p>We need a public IP so that we can SSH into this machine.</p><pre><code class="language-hcl">resource &quot;azurerm_public_ip&quot; &quot;public_ip&quot; {
  name                = format(&quot;%s_%s&quot;, var.name, &quot;ip&quot;)
  resource_group_name = var.resource_group_name
  location            = var.location
  allocation_method   = &quot;Dynamic&quot;
}
</code></pre><h3 id="network-interface">Network Interface</h3><pre><code class="language-hcl">resource &quot;azurerm_network_interface&quot; &quot;example&quot; {
  name                = format(&quot;%s_%s&quot;, var.name, &quot;network_interface&quot;)
  location            = var.location
  resource_group_name = var.resource_group_name

  ip_configuration {
    name                          = &quot;internal&quot;
    subnet_id                     = var.subnet_id
    private_ip_address_allocation = &quot;Dynamic&quot;
    public_ip_address_id          = azurerm_public_ip.public_ip.id
  }
}</code></pre><h3 id="network-security-group">Network Security Group</h3><p>While creating NSG, make sure to open port 22 for protocol TCP (for SSH access). Associate network interface (previously created) with NSG.</p><pre><code class="language-hcl">resource &quot;azurerm_network_security_group&quot; &quot;nsg&quot; {
  name                = format(&quot;%s_%s&quot;, var.name, &quot;nsg&quot;)
  location            = var.location
  resource_group_name = var.resource_group_name

  security_rule {
    name                       = &quot;allow_ssh_sg&quot;
    priority                   = 100 
    direction                  = &quot;Inbound&quot;
    access                     = &quot;Allow&quot;
    protocol                   = &quot;Tcp&quot;
    source_port_range          = &quot;*&quot;
    destination_port_range     = &quot;22&quot;
    source_address_prefix      = &quot;*&quot;
    destination_address_prefix = &quot;*&quot;
  }

  depends_on = [
    azurerm_network_interface.example
  ]
}

resource &quot;azurerm_network_interface_security_group_association&quot; &quot;association&quot; {
  network_interface_id      = azurerm_network_interface.example.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}</code></pre><h3 id="virtual-machine">Virtual Machine</h3><p>Now create a virtual machine with system assigned identity</p><pre><code class="language-hcl">resource &quot;azurerm_linux_virtual_machine&quot; &quot;example&quot; {
  name                = format(&quot;%s%s&quot;, var.name, &quot;vm&quot;)
  resource_group_name = var.resource_group_name
  location            = var.location
  size                = &quot;Standard_B1s&quot;
  admin_username      = &quot;adminuser&quot;

  network_interface_ids = [
    azurerm_network_interface.example.id,
  ]

  admin_ssh_key {
    username   = &quot;adminuser&quot;
    public_key = file(var.public_key_path)
  }

  os_disk {
    caching              = &quot;ReadWrite&quot;
    storage_account_type = &quot;Standard_LRS&quot;
  }

  source_image_reference {
    publisher = &quot;Canonical&quot;
    offer     = &quot;UbuntuServer&quot;
    sku       = &quot;16.04-LTS&quot;
    version   = &quot;latest&quot;
  }

  identity {
    type = &quot;SystemAssigned&quot;
  }
}</code></pre><p>We need to output some values that are required while creating the rest of the infrastructure, such as the <code>principal_id</code> used for role assignment.</p><pre><code class="language-hcl">output &quot;public_ip&quot; {
  value = azurerm_public_ip.public_ip.ip_address
}

output &quot;vm_id&quot; {
  value = azurerm_linux_virtual_machine.example.id
}

output &quot;vm_pricipal_id&quot; {
  value = azurerm_linux_virtual_machine.example.identity[0].principal_id
}</code></pre><p>Now use the above module to create a virtual machine, but make sure to declare all variables used in that module.</p><pre><code class="language-hcl">module &quot;vm&quot; {
    source = &quot;./vm/&quot;
    
    resource_group_name = azurerm_resource_group.example.name
    location            = azurerm_resource_group.example.location
    public_key_path     = &quot;&lt;public_key_path&gt;&quot;
    name                = &quot;demo&quot;
    subnet_id           = azurerm_subnet.public_subnet.id
}</code></pre><h2 id="creating-storage-account">Creating Storage Account</h2><p>Module for the storage account</p><pre><code class="language-hcl">resource &quot;azurerm_storage_account&quot; &quot;storage&quot; {
  name                     = format(&quot;%s%s&quot;, var.name, &quot;storage9553&quot;)
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = &quot;Standard&quot;
  account_replication_type = &quot;LRS&quot;

  network_rules {
      default_action = &quot;Deny&quot;
      ip_rules = var.white_list_ip
      virtual_network_subnet_ids = var.whitelist_subnet_ids
  }

  tags = {
    environment = &quot;staging&quot;
  }
}</code></pre><p>Let&apos;s output some variables like <code>storage_account_id</code></p><pre><code class="language-hcl">output &quot;storage_account_id&quot; {
    value = azurerm_storage_account.storage.id
}

# the container resources below reference this output as well
output &quot;storage_account_name&quot; {
    value = azurerm_storage_account.storage.name
}</code></pre><p>Use the storage account module to create the infrastructure, and also create 2 blob storage containers. Make sure to add your IP to the <code>white_list_ip</code> list, else Terraform will be unable to create the containers. </p><pre><code class="language-hcl">module &quot;storage_account&quot; {
    source = &quot;./storageaccount&quot;

    resource_group_name = azurerm_resource_group.example.name
    location            = azurerm_resource_group.example.location
    name                = &quot;demo12&quot;
    # whitelist ip of machine from which terraform creating infra
    # else terraform apply will fail with 403
    white_list_ip        = [&quot;106.210.242.214&quot;]
    whitelist_subnet_ids = [azurerm_subnet.public_subnet.id]
}

resource &quot;azurerm_storage_container&quot; &quot;container&quot; {
  name                  = &quot;demo&quot;
  storage_account_name  = module.storage_account.storage_account_name
  container_access_type = &quot;private&quot;
  depends_on = [
    module.storage_account
  ]
}

resource &quot;azurerm_storage_container&quot; &quot;container2&quot; {
  name                  = &quot;demo1&quot;
  storage_account_name  = module.storage_account.storage_account_name
  container_access_type = &quot;private&quot;
  depends_on = [
    module.storage_account
  ]
}</code></pre><h2 id="role-assignments">Role Assignments</h2><p>We will assign the <code>Reader</code> role on the storage account &amp; the <code>Storage Blob Data Contributor</code> role on the <code>demo1</code> container. This means we can list containers in the storage account but can only write to the <code>demo1</code> container. These 2 roles are built in to Azure, so there is no need to create them. </p><pre><code class="language-hcl">
# Read role for storage account
resource &quot;azurerm_role_assignment&quot; &quot;storage&quot; {
    scope = module.storage_account.storage_account_id

    # using azure defined role
    role_definition_name = &quot;Reader&quot;

    principal_id = module.vm.vm_pricipal_id
}

# Write role for container
resource &quot;azurerm_role_assignment&quot; &quot;container&quot; {
    scope = azurerm_storage_container.container2.resource_manager_id

    # using azure defined role
    role_definition_name = &quot;Storage Blob Data Contributor&quot;

    principal_id = module.vm.vm_pricipal_id
}</code></pre><p>When you apply the above terraform code, it will create 13 resources. </p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://avabodha.in/content/images/2022/01/image-2.png" class="kg-image" alt="Accessing Azure Storage Account from VM using System Assigned Identity &amp; Roles" loading="lazy" width="1766" height="694" srcset="https://avabodha.in/content/images/size/w600/2022/01/image-2.png 600w, https://avabodha.in/content/images/size/w1000/2022/01/image-2.png 1000w, https://avabodha.in/content/images/size/w1600/2022/01/image-2.png 1600w, https://avabodha.in/content/images/2022/01/image-2.png 1766w" sizes="(min-width: 1200px) 1200px"></figure><h2 id="checking-access-from-vm">Checking Access from VM</h2><p>Connect to the virtual machine using SSH:</p><pre><code class="language-bash">ssh -i &lt;pvt_key&gt; adminuser@&lt;vm_ip&gt;</code></pre><p>Install the Azure CLI (<code>az</code>) on that machine; the installation steps depend on the machine&apos;s OS. If you used the same OS as I did, then check <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt">https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt</a></p><h4 id="login">Login</h4><pre><code class="language-bash"># login using identity
az login --identity</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2022/01/image-3.png" class="kg-image" alt="Accessing Azure Storage Account from VM using System Assigned Identity &amp; Roles" loading="lazy" width="766" height="431" srcset="https://avabodha.in/content/images/size/w600/2022/01/image-3.png 600w, https://avabodha.in/content/images/2022/01/image-3.png 766w" sizes="(min-width: 720px) 720px"></figure><h4 id="list-containers">List containers</h4><pre><code class="language-bash"># list containers
# make sure that --auth-mode is login
az storage container list --account-name demo12storage9553 --auth-mode login</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2022/01/image-4.png" class="kg-image" alt="Accessing Azure Storage Account from VM using System Assigned Identity &amp; Roles" loading="lazy" width="977" height="989" srcset="https://avabodha.in/content/images/size/w600/2022/01/image-4.png 600w, https://avabodha.in/content/images/2022/01/image-4.png 977w" sizes="(min-width: 720px) 720px"></figure><h4 id="upload-file-in-demo1">Upload File in <code>demo1</code></h4><p>First, create a file.</p><pre><code class="language-bash">echo &quot;Hello World&quot; &gt; hello.txt</code></pre><p>Now, upload <code>hello.txt</code> to <code>demo1</code>:</p><pre><code class="language-bash"># upload to the demo1 container
az storage blob upload --account-name demo12storage9553 --container-name demo1 --name hello-world.txt --file hello.txt --auth-mode login</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2022/01/image-5.png" class="kg-image" alt="Accessing Azure Storage Account from VM using System Assigned Identity &amp; Roles" loading="lazy" width="1579" height="165" srcset="https://avabodha.in/content/images/size/w600/2022/01/image-5.png 600w, https://avabodha.in/content/images/size/w1000/2022/01/image-5.png 1000w, https://avabodha.in/content/images/2022/01/image-5.png 1579w" sizes="(min-width: 720px) 720px"></figure><h4 id="upload-file-in-demo">Upload file in <code>demo</code></h4><p>The VM doesn&apos;t have write access to the <code>demo</code> container. Let&apos;s try to upload a file to <code>demo</code>:</p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2022/01/image-6.png" class="kg-image" alt="Accessing Azure Storage Account from VM using System Assigned Identity &amp; Roles" loading="lazy" width="946" height="425" srcset="https://avabodha.in/content/images/size/w600/2022/01/image-6.png 600w, https://avabodha.in/content/images/2022/01/image-6.png 946w" sizes="(min-width: 720px) 720px"></figure><h3 id="the-end">The End</h3><p>Make sure to destroy all created resources. </p>]]></content:encoded></item><item><title><![CDATA[Cost-efficient logging solution in Azure using Event hub and Azure Data Explorer]]></title><description><![CDATA[<p>Storing and managing logs is one of the important activities in software companies. Because using logs you can find root causes of multiple things like bugs, downtime, cyber attack, etc. 
Different cloud providers have their own managed logging and monitoring systems: AWS has CloudWatch, CloudTrail, etc., and Azure has</p>]]></description><link>https://avabodha.in/cost-efficient-logging-solution-in-azure-using-event-hub-and-azure-data-explorer/</link><guid isPermaLink="false">61c58a38af34b505743798c2</guid><category><![CDATA[azure]]></category><category><![CDATA[cloud]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Fri, 24 Dec 2021 16:56:15 GMT</pubDate><media:content url="https://avabodha.in/content/images/2021/12/eh-adx-logging2.drawio-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2021/12/eh-adx-logging2.drawio-1.png" alt="Cost-efficient logging solution in Azure using Event hub and Azure Data Explorer"><p>Storing and managing logs is one of the important activities in software companies, because logs let you find the root causes of things like bugs, downtime, and cyber attacks. Different cloud providers have their own managed logging and monitoring systems: AWS has CloudWatch, CloudTrail, etc., and Azure has Application Insights. But suppose you have hundreds of microservices inside a Kubernetes cluster and you want an easy-to-use, easy-to-set-up logging system that works exactly the same for every microservice. You could use fluentd, but in this article, I will show you how to use Event Hub and Azure Data Explorer to collect and access logs in Azure.</p><p>As you can see in the feature image, your application, whether deployed in App Service, Azure Kubernetes Service, or a Virtual Machine, will push logs to the Event Hub, and then Azure Data Explorer will do the storing and querying for you. In this article, I will run a simple Flask application locally; it will push logs to the Event Hub, which we will access using Azure Data Explorer.</p><!--kg-card-begin: markdown--><p>Plan of action:</p>
<ol>
<li>Create infrastructure using Terraform</li>
<li>Write a small Flask web app</li>
<li>Check logs in ADX</li>
</ol>
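<p>Since everything below revolves around JSON-formatted log records, it helps to see the payload shape first. The following is an illustrative sketch only (the <code>to_log_json</code> helper and the <code>my-log-app</code> name are mine, not part of any handler library); the field names match the ADX table and ingestion mapping created later in this article, and in the real app the handler/formatter pair builds an equivalent payload for you:</p><pre><code class="language-python">import json
import logging

# build the JSON payload shape that the ADX ingestion mapping expects;
# in the real app, the Event Hub logging handler produces this for us
def to_log_json(record, application_name):
    formatter = logging.Formatter()
    return json.dumps({
        'level': record.levelname,
        'message': record.getMessage(),
        'loggerName': record.name,
        'exception': record.exc_text or '',
        'applicationName': application_name,
        'processName': record.processName,
        'processID': str(record.process),
        'threadName': record.threadName,
        'threadID': str(record.thread),
        'timestamp': formatter.formatTime(record),
    })

record = logging.LogRecord('app', logging.INFO, 'main.py', 1,
                           'inside hello world', None, None)
payload = to_log_json(record, 'my-log-app')
print(payload)</code></pre><p>One JSON document like this per log line is what the Event Hub will carry and what Azure Data Explorer will ingest.</p>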
<!--kg-card-end: markdown--><p>You can find all code used in this article at <a href="https://github.com/lets-learn-it/terraform-learning/tree/azure/06-eh-adx-logging">https://github.com/lets-learn-it/terraform-learning/tree/azure/06-eh-adx-logging</a></p><h2 id="creating-infrastructure">Creating Infrastructure</h2><p>I am using <code>Terraform v1.0.11</code>. Add the Azure provider and make sure to use azurerm version &quot;2.88.1&quot; or newer, because we are using <code>$Default</code> in <code>azurerm_kusto_eventhub_data_connection</code>.</p><pre><code class="language-hcl">terraform {
  required_providers {
    azurerm = {
      source  = &quot;hashicorp/azurerm&quot;
      version = &quot;2.88.1&quot;
    }
  }
}

provider &quot;azurerm&quot; {
  features {}
}</code></pre><p>Now create one resource group in which all our resources will reside, like below:</p><pre><code class="language-hcl">resource &quot;azurerm_resource_group&quot; &quot;logs_rg&quot; {
  name     = var.resource_group
  location = &quot;East US&quot;
}</code></pre><p>Now create an event hub namespace and event hub in that namespace. We will use the default consumer group for this example.</p><pre><code class="language-hcl">resource &quot;azurerm_eventhub_namespace&quot; &quot;eh_namespace&quot; {
  name                = var.eh_namespace
  location            = azurerm_resource_group.logs_rg.location
  resource_group_name = azurerm_resource_group.logs_rg.name
  sku                 = &quot;Standard&quot;
  capacity            = 1
  zone_redundant      = true

  tags = var.tags
}

resource &quot;azurerm_eventhub&quot; &quot;eh&quot; {
  name                = var.eh_name
  namespace_name      = azurerm_eventhub_namespace.eh_namespace.name
  resource_group_name = var.resource_group
  partition_count     = 1
  message_retention   = 1
}

data &quot;azurerm_eventhub_consumer_group&quot; &quot;default&quot; {
  name                = &quot;$Default&quot;
  namespace_name      = azurerm_eventhub_namespace.eh_namespace.name
  eventhub_name       = azurerm_eventhub.eh.name
  resource_group_name = var.resource_group
}</code></pre><p>We need an Azure Data Explorer cluster, and a database in it, to store all logs. </p><pre><code class="language-hcl">resource &quot;azurerm_kusto_cluster&quot; &quot;adx&quot; {
  name                = var.adx_cluster
  location            = azurerm_resource_group.logs_rg.location
  resource_group_name = azurerm_resource_group.logs_rg.name
  engine              = &quot;V3&quot;
  double_encryption_enabled  = var.double_encryption

  sku {
    name     = var.adx_sku_name
    capacity = var.adx_sku_capacity
  }

  tags = var.tags
}

resource &quot;azurerm_kusto_database&quot; &quot;database&quot; {
  name                = var.adx_database
  resource_group_name = var.resource_group
  location            = azurerm_resource_group.logs_rg.location
  cluster_name        = azurerm_kusto_cluster.adx.name
  hot_cache_period    = var.hot_cache_period
  soft_delete_period  = var.soft_delete_period
}</code></pre><p>Create a variables file and add all the variables we have used so far. </p><pre><code class="language-hcl">variable &quot;resource_group&quot; {
    description = &quot;where we place event hub and azure data explorer&quot;
    type = string
}

variable &quot;adx_cluster&quot; {
    description = &quot;name of adx cluster&quot;
    type = string
}

variable &quot;adx_database&quot; {
    type = string
    description = &quot;name of adx dataset&quot;
}

variable &quot;eh_namespace&quot; {
    type = string
    description = &quot;name of event hub namespace&quot;
}

variable &quot;eh_name&quot; {
    type = string
    description = &quot;name of event hub&quot;
}

variable &quot;double_encryption&quot; {
    type = bool
}

variable &quot;hot_cache_period&quot; {
    type = string
    description = &quot;data will be cached for this no of days&quot;
}

variable &quot;soft_delete_period&quot; {
    type = string
    description = &quot;after these no of days data will be deleted&quot;
}

variable &quot;adx_sku_name&quot; {
    type = string
    description = &quot;type of adx cluster&quot;
}

variable &quot;adx_sku_capacity&quot; {
    type = number
}

variable &quot;tags&quot; {
    type = map(string)
}

variable &quot;adx_eh_connection_name&quot; {
    type = string
}

variable &quot;adx_db_table_name&quot; {
    type = string
}

variable &quot;ingestion_mapping_rule_name&quot; {
    type = string
}

variable &quot;eh_message_format&quot; {
    type = string
    default = &quot;JSON&quot;
}</code></pre><p>Create a <code>terraform.tfvars</code> file to provide values for all the variables used:</p><pre><code class="language-hcl">resource_group = &quot;eh-adx-logs&quot;

adx_cluster = &quot;logscluster&quot;
adx_database = &quot;logsdb&quot;
double_encryption = true
hot_cache_period = &quot;P31D&quot;
soft_delete_period = &quot;P365D&quot;
adx_sku_name = &quot;Standard_D11_v2&quot;
adx_sku_capacity = 2

eh_namespace = &quot;logseventhubns&quot;
eh_name = &quot;logs_eventhub&quot;

adx_eh_connection_name = &quot;adxehconn&quot;
adx_db_table_name = &quot;logs_table&quot;
ingestion_mapping_rule_name = &quot;logs_table_json_ingestion_mapping&quot;

tags = {
    &quot;environment&quot;: &quot;prod&quot;
}</code></pre><p>To create these resources, run a plan and then apply. You will see that Terraform creates 5 resources. (The Azure Data Explorer cluster took 15 min for me &#x1F612;)</p><pre><code class="language-sh"># initialize the working directory on first use
terraform init

terraform plan

# then run
terraform apply</code></pre><p>Currently, Terraform does not support creating tables in the database or ingestion mappings for them, so we will create those manually. </p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/12/adx_mappings.PNG" class="kg-image" alt="Cost-efficient logging solution in Azure using Event hub and Azure Data Explorer" loading="lazy" width="1920" height="919" srcset="https://avabodha.in/content/images/size/w600/2021/12/adx_mappings.PNG 600w, https://avabodha.in/content/images/size/w1000/2021/12/adx_mappings.PNG 1000w, https://avabodha.in/content/images/size/w1600/2021/12/adx_mappings.PNG 1600w, https://avabodha.in/content/images/2021/12/adx_mappings.PNG 1920w" sizes="(min-width: 720px) 720px"></figure><p>To open the query editor, go to the Azure Data Explorer cluster in the Azure dashboard; on the left side, you can find <strong>Databases</strong>. There you can find our database, <code>logsdb</code>. Double-click on it, then select <strong>Query</strong>. There, run the following table creation query. </p><pre><code class="language-sql">.create table logs_table ( 
	level:string, 
    message:string, 
    loggerName:string, 
    exception:string, 
    applicationName:string, 
    processName:string, 
    processID:string, 
    threadName:string, 
    threadID:string, 
    timestamp:datetime
)</code></pre><p>To create the ingestion mapping, run the following query: </p><pre><code class="language-sql">.create table logs_table ingestion json mapping &apos;logs_table_json_ingestion_mapping&apos; 
&apos;[{&quot;column&quot;:&quot;level&quot;,&quot;Properties&quot;:{&quot;path&quot;:&quot;$.level&quot;}},{&quot;column&quot;:&quot;message&quot;,&quot;Properties&quot;:{&quot;path&quot;:&quot;$.message&quot;}},{&quot;column&quot;:&quot;loggerName&quot;,&quot;Properties&quot;:{&quot;path&quot;:&quot;$.loggerName&quot;}},{&quot;column&quot;:&quot;exception&quot;,&quot;Properties&quot;:{&quot;path&quot;:&quot;$.exception&quot;}},{&quot;column&quot;:&quot;applicationName&quot;,&quot;Properties&quot;:{&quot;path&quot;:&quot;$.applicationName&quot;}},{&quot;column&quot;:&quot;processName&quot;,&quot;Properties&quot;:{&quot;path&quot;:&quot;$.processName&quot;}},{&quot;column&quot;:&quot;processID&quot;,&quot;Properties&quot;:{&quot;path&quot;:&quot;$.processID&quot;}},{&quot;column&quot;:&quot;threadName&quot;,&quot;Properties&quot;:{&quot;path&quot;:&quot;$.threadName&quot;}},{&quot;column&quot;:&quot;threadID&quot;,&quot;Properties&quot;:{&quot;path&quot;:&quot;$.threadID&quot;}},{&quot;column&quot;:&quot;timestamp&quot;,&quot;Properties&quot;:{&quot;path&quot;:&quot;$.timestamp&quot;}}]&apos;</code></pre><p>After creating the table and mapping, we need to create a connection between the Event Hub and Azure Data Explorer. We didn&apos;t create it earlier with Terraform because it needs the table and mapping to exist first. Now add the following resource in Terraform and run plan and apply again.</p><pre><code class="language-hcl">resource &quot;azurerm_kusto_eventhub_data_connection&quot; &quot;eventhub_connection&quot; {
  name                = var.adx_eh_connection_name
  resource_group_name = var.resource_group
  location            = azurerm_resource_group.logs_rg.location
  cluster_name        = azurerm_kusto_cluster.adx.name
  database_name       = azurerm_kusto_database.database.name

  eventhub_id    = azurerm_eventhub.eh.id
  consumer_group = data.azurerm_eventhub_consumer_group.default.name

  table_name        = var.adx_db_table_name
  mapping_rule_name = var.ingestion_mapping_rule_name
  data_format       = var.eh_message_format
}</code></pre><h2 id="flask-application">Flask Application</h2><p>To push logs to the event hub, we need the <code>EventhubHandler</code> handler for the Python logging library. Install it using pip as follows:</p><pre><code class="language-sh">pip install EventhubHandler</code></pre><p>Now import the required packages and create a logger. Make sure to use <code>JSONFormatter</code> because our mapping expects JSON from the event hub. I am creating a root-level logger so that logs from all modules go to the event hub.</p><pre><code class="language-python">from flask import Flask
import logging
from EventhubHandler.handler import EventHubHandler
from EventhubHandler.formatter import JSONFormatter
app = Flask(__name__)

logger = logging.getLogger()
# the root logger defaults to WARNING; lower it so INFO logs are emitted
logger.setLevel(logging.DEBUG)

eh = EventHubHandler()
eh.setLevel(logging.DEBUG)

# format will be depends on what you choose at adx
# I am using JSON
formatter = JSONFormatter({&quot;level&quot;: &quot;levelname&quot;, 
                            &quot;message&quot;: &quot;message&quot;, 
                            &quot;loggerName&quot;: &quot;name&quot;, 
                            &quot;processName&quot;: &quot;processName&quot;,
                            &quot;processID&quot;: &quot;process&quot;, 
                            &quot;threadName&quot;: &quot;threadName&quot;, 
                            &quot;threadID&quot;: &quot;thread&quot;,
                            &quot;timestamp&quot;: &quot;asctime&quot;,
                            &quot;exception&quot;: &quot;exc_info&quot;,
                            &quot;applicationName&quot;: &quot;&quot;})
eh.setFormatter(formatter)
logger.addHandler(eh)</code></pre><p>Write some endpoints so that we can test our Flask application. I created an <code>/exception</code> endpoint to check how exceptions get logged.</p><pre><code class="language-python">@app.route(&quot;/&quot;)
def hello_world():
    logger.info(&quot;inside hello world&quot;)
    return &quot;&lt;p&gt;Hello, World!&lt;/p&gt;&quot;

@app.get(&quot;/exception&quot;)
def exception():
    try:
        x = 1 / 0
    except ZeroDivisionError as e:
        logger.exception(&apos;ZeroDivisionError: {0}&apos;.format(e))
    return &quot;Exception Occurred&quot;

if __name__ == &apos;__main__&apos;:
    app.run(host=&quot;0.0.0.0&quot;)</code></pre><p>Save all the Flask code in one file &amp; name it <code>main.py</code>. And make sure to set these 3 environment variables as follows (for Linux-based systems):</p><pre><code class="language-sh">export applicationName=&quot;my-log-app&quot;
export eh_ns_connection_string=&lt;event hub namespace connection string&gt;
export eventhub_name=&quot;logs_eventhub&quot;</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://avabodha.in/content/images/2021/12/image.png" class="kg-image" alt="Cost-efficient logging solution in Azure using Event hub and Azure Data Explorer" loading="lazy" width="1917" height="920" srcset="https://avabodha.in/content/images/size/w600/2021/12/image.png 600w, https://avabodha.in/content/images/size/w1000/2021/12/image.png 1000w, https://avabodha.in/content/images/size/w1600/2021/12/image.png 1600w, https://avabodha.in/content/images/2021/12/image.png 1917w" sizes="(min-width: 720px) 720px"><figcaption>Event hub primary connection string</figcaption></figure><p>Run the application using the following command:</p><pre><code class="language-sh">python main.py</code></pre><h3 id="checking-logs">Checking logs</h3><p>To check logs, run a query in the same query editor. We set <code>applicationName=my-log-app</code>; using this, we will get the last 20 minutes of logs:</p><pre><code class="language-kql">logs_table
| where applicationName contains &quot;my-log-app&quot;
| where timestamp &gt; ago(20m)</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/12/log2.PNG" class="kg-image" alt="Cost-efficient logging solution in Azure using Event hub and Azure Data Explorer" loading="lazy" width="1920" height="920" srcset="https://avabodha.in/content/images/size/w600/2021/12/log2.PNG 600w, https://avabodha.in/content/images/size/w1000/2021/12/log2.PNG 1000w, https://avabodha.in/content/images/size/w1600/2021/12/log2.PNG 1600w, https://avabodha.in/content/images/2021/12/log2.PNG 1920w" sizes="(min-width: 720px) 720px"></figure><h2 id="when-to-use-when-not">When to use &amp; when not</h2><!--kg-card-begin: markdown--><ol>
<li>Use this only if you have 100s of microservices. My team is using it with around 40 microservices, and surprisingly we have never gone above 10% Azure Data Explorer cluster utilization.</li>
<li>If you have fewer services, it will be too costly per service. Check pricing at <a href="https://azure.microsoft.com/en-in/pricing/details/data-explorer/#pricing">https://azure.microsoft.com/en-in/pricing/details/data-explorer/#pricing</a></li>
</ol>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Detailed Compilation Process with C Program Example]]></title><description><![CDATA[We will see the journey of the c program from source code to executable with help of the GCC compiler.]]></description><link>https://avabodha.in/detailed-compilation-process-with-c-example/</link><guid isPermaLink="false">61a22b6f6ea449aa834bf363</guid><category><![CDATA[compiler]]></category><category><![CDATA[c-programming]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Sat, 27 Nov 2021 15:55:25 GMT</pubDate><media:content url="https://avabodha.in/content/images/2021/11/compilation.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2021/11/compilation.jpg" alt="Detailed Compilation Process with C Program Example"><p>We will see the journey of the c program from source code to executable with help of the GCC compiler. We will see the input and output of all 4 steps involved in the compilation process.</p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/11/Untitled-Diagram.drawio.png" class="kg-image" alt="Detailed Compilation Process with C Program Example" loading="lazy" width="388" height="685"></figure><!--kg-card-begin: markdown--><p>The plan of action for this blog will be as follows,</p>
<ol>
<li>Source C code</li>
<li>Preprocessing</li>
<li>Compiling</li>
<li>Assembling</li>
<li>Linking</li>
</ol>
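<p>Before looking at each stage in detail, the whole journey can be driven by hand with GCC flags. Here is a self-contained sketch using a throwaway one-file program (the file name and the exit-status trick are mine, not from this article's example project):</p><pre><code class="language-shell"># create a trivial program; its exit status will be 2 + 3 = 5
printf 'int main(void){ return 2 + 3; }\n' | tee hello.c

gcc -E hello.c -o hello.i   # 1. preprocess
gcc -S hello.i -o hello.s   # 2. compile to assembly
gcc -c hello.s -o hello.o   # 3. assemble to object code
gcc hello.o -o hello        # 4. link into an executable

./hello; echo exit status: $?   # prints: exit status: 5</code></pre><p>Each intermediate file (<code>hello.i</code>, <code>hello.s</code>, <code>hello.o</code>) is the evidence of one stage; the sections below reproduce exactly these steps for <code>main.c</code>.</p>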
<!--kg-card-end: markdown--><p>Compilation is the 2nd step of this process, and yet the whole process is also called compilation. &#x1F615;</p><h2 id="source-c-code">Source C Code</h2><p>We will create 2 libraries named <strong>simple_math</strong> and <strong>algebra</strong>. <strong>simple_math</strong> consists of 2 functions, <strong>sum</strong> and <strong>minus</strong>, while <strong>algebra</strong> consists of a single function, <strong>evaluate</strong>. You can find the source code in the GitHub repository <a href="https://github.com/lets-learn-it/c-learning/tree/master/99-simple_math-and-algebra">link</a>.</p><!--kg-card-begin: html--><div class="tab">
    <button class="tablinks" onclick="openCity(event, &apos;main.c&apos;)" id="defaultOpen">main.c</button>
    <button class="tablinks" onclick="openCity(event, &apos;algebra.c&apos;)">algebra.c</button>
    <button class="tablinks" onclick="openCity(event, &apos;algebra.h&apos;)">algebra.h</button>
  <button class="tablinks" onclick="openCity(event, &apos;simple_math.c&apos;)">simple_math.c</button>
    <button class="tablinks" onclick="openCity(event, &apos;simple_math.h&apos;)">simple_math.h</button>
</div>

<div id="simple_math.h" class="tabcontent">
<pre class="tab_pre">
<code class="language-c">#ifndef __SIMPLE_MATH__
#define __SIMPLE_MATH__

int sum(int a, int b);

int minus(int a, int b);

#endif</code>
</pre>
</div>

<div id="algebra.c" class="tabcontent">
<pre class="tab_pre">
<code class="language-c">#include&quot;algebra.h&quot;
#include&lt;stdio.h&gt;
#include&lt;string.h&gt;

int evaluate(char exp[]) {
  int length = strlen(exp);

  // considering 0+exp
  int left = 0, right = 0;

  // function pointer so that 
  // we can call correct function 
  int (*last_sign)(int, int) = &amp;sum;

  for(int i=0;i&lt;length;i++){
    
    if(exp[i] == &apos;+&apos;) {
      left = (*last_sign)(left, right);
      right = 0;
      last_sign = &amp;sum;
    } else if(exp[i] == &apos;-&apos;) {
      left = (*last_sign)(left, right);
      right = 0;
      last_sign = &amp;minus;
    } else {
      right = (exp[i] - &apos;0&apos;) + (right * 10);
    }
  }
  
  return (*last_sign)(left, right);
}</code>
</pre>
</div>

<div id="main.c" class="tabcontent">
<pre class="tab_pre">
<code class="language-c">#include&quot;algebra.h&quot;
#include&quot;simple_math.h&quot;

#include&lt;stdio.h&gt;

#define EXP &quot;12-19+33+57&quot;

int main(){
  printf(&quot;12-19+33+57=%d\n&quot;,evaluate(EXP));

  printf(&quot;Sum of 2 and 3: %d&quot;, sum(2, 3));
  return 0;
}
</code>
</pre>
</div>

<div id="simple_math.c" class="tabcontent">
<pre class="tab_pre">
<code class="language-c">int sum(int a, int b) {
  return a + b;
}

int minus(int a, int b) {
  return a - b;
}</code>
</pre>
</div>
<div id="algebra.h" class="tabcontent">
<pre class="tab_pre">
<code class="language-c">#ifndef __ALGEBRA__
#define __ALGEBRA__

#include&quot;simple_math.h&quot;

int evaluate(char a[]);

#endif</code>
</pre>
</div>
<!--kg-card-end: html--><h2 id="preprocessing">Preprocessing</h2><p>At the preprocessing stage, header files get added recursively. In our example, <code>algebra.h</code> and <code>simple_math.h</code> get added in the first pass, and in the next pass the <code>simple_math.h</code> included from <code>algebra.h</code> gets added recursively. The preprocessor will also substitute the value of <code>EXP</code> wherever it is used.</p><p>The preprocessor handles include files, conditional compilation instructions, and macros. You can preprocess our code with GCC as follows:</p><pre><code class="language-shell"># -E Preprocess only; do not compile, assemble or link.
gcc -E main.c -o main_pre.c</code></pre><p>You can do the same thing with <code>simple_math.c</code> &amp; <code>algebra.c</code> and create files <code>simple_math_pre.c</code> &amp; <code>algebra_pre.c</code> &#xA0;respectively.</p><h2 id="compilation">Compilation</h2><p>In this stage, we create assembly code from preprocessed files.</p><pre><code class="language-shell"># -S Compile only; do not assemble or link.
# this will create assembly code
gcc -S main_pre.c -o main.s</code></pre><p>You can open <code>main.s</code> in any text editor and check assembly code generated. Create similar assembly code files for the other 2 source files.</p><h2 id="assembling">Assembling</h2><p>During this stage, an assembler is used to translate the assembly instructions to object code. The output consists of actual instructions to be run by the target processor.</p><pre><code class="language-shell"># -c Compile and assemble, but do not link.
# if already compiled then only assemble
gcc -c main.c -o main.o

# also assemble the other two source files,
# since the linking step below needs their object files
gcc -c algebra.c -o algebra.o
gcc -c simple_math.c -o simple_math.o

# we can't read an object file as plain text, so inspect it with objdump
# (object files use the ELF format: Executable and Linkable Format)
objdump -D main.o</code></pre><h2 id="linking">Linking</h2><p>The linker takes one or more object files or libraries as input and combines them to produce a single executable file. In this stage, it resolves references to external symbols, assigns final addresses to functions and variables, and revises code and data to reflect the new addresses (a process called relocation). </p><pre><code class="language-shell"># link all object files and libraries
# and create a single executable
gcc main.o algebra.o simple_math.o -o main</code></pre><h4 id="linking-static-dynamic-libraries">Linking static &amp; dynamic libraries</h4><p>This step is not necessary for our example because we are not using any external library.</p><p>If you are using static libraries, you can link them as follows:</p><pre><code class="language-shell"># -L specifies the path of the library
# in our case, it is in the same folder
# -ldll means libdll.a
# you can use libdll.a directly instead of -ldll
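# (for reference, libdll.a itself would be created from
#  object files with ar, e.g. `ar rcs libdll.a dll.o`)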
gcc &lt;application_object_files&gt; -L . -ldll
gcc &lt;application_object_files&gt; -L . libdll.a</code></pre><p>Or, if you are using dynamic (shared) libraries, you can link as follows:</p><pre><code class="language-shell"># -ldll links the libdll shared library
# it should be present in the /usr/lib folder
gcc &lt;application_object_files&gt; -ldll</code></pre><h3 id="references">References</h3><!--kg-card-begin: markdown--><p><a href="https://www.hackerearth.com/practice/notes/what-happens-when-a-c-program-runs/">What Really Happens when a C program runs?</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Creating private endpoint for Azure storage account using Terraform]]></title><description><![CDATA[In this blog, we will create a private endpoint for the storage account (blob storage) using terraform. Also, I will show how to access blob storage using Azure CLI from the virtual machine.]]></description><link>https://avabodha.in/creating-private-endpoint-for-storage-account-using-terraform/</link><guid isPermaLink="false">617fd7506ea449aa834bf146</guid><category><![CDATA[cloud]]></category><category><![CDATA[terraform]]></category><category><![CDATA[azure]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Mon, 01 Nov 2021 13:57:50 GMT</pubDate><media:content url="https://avabodha.in/content/images/2021/11/private-endpoint.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<img src="https://avabodha.in/content/images/2021/11/private-endpoint.png" alt="Creating private endpoint for Azure storage account using Terraform"><p><strong>According to Microsoft</strong>, an Azure storage account contains all of your Azure Storage data objects: blobs, file shares, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that&apos;s accessible from anywhere in the world over HTTP or HTTPS. Data in your storage account is durable and highly available, secure, and massively scalable. <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview?toc=/azure/storage/blobs/toc.json">Read more...</a></p>
</blockquote>
<!--kg-card-end: markdown--><p>For this blog, we are concerned only with blob storage. We will create a storage account with one container in it, and also one virtual machine from which we can access the storage account (container). The storage account won&apos;t be accessible from the public internet, and we are not whitelisting the subnet in which the virtual machine resides.</p><p>All code used in this blog is available at <a href="https://github.com/lets-learn-it/terraform-learning/tree/azure/05-private-endpoint">https://github.com/lets-learn-it/terraform-learning/tree/azure/05-private-endpoint</a></p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2023/05/storage_account_with_private_endpoint.drawio.png" class="kg-image" alt="Creating private endpoint for Azure storage account using Terraform" loading="lazy" width="852" height="663" srcset="https://avabodha.in/content/images/size/w600/2023/05/storage_account_with_private_endpoint.drawio.png 600w, https://avabodha.in/content/images/2023/05/storage_account_with_private_endpoint.drawio.png 852w" sizes="(min-width: 720px) 720px"></figure><!--kg-card-begin: markdown--><p>We will create all infrastructure in 4 steps.</p>
<ol>
<li><a href="#resource-group-v-net-and-subnets">resource group, virtual network, and subnets</a></li>
<li><a href="#the-virtual-machine-in-the-public-subnet">a virtual machine in the public subnet</a></li>
<li><a href="#storage-account-with-one-container">storage account with one container</a></li>
<li><a href="#dns-zone-and-private-endpoint">DNS zone and private endpoint</a></li>
</ol>
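<p>The <code>azurerm</code> provider itself is assumed to be configured as in the linked repository, for example:</p>
<pre><code class="language-hcl">provider &quot;azurerm&quot; {
  features {}
}</code></pre>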
<!--kg-card-end: markdown--><h3 id="directory-structure">Directory structure</h3><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/11/image.png" class="kg-image" alt="Creating private endpoint for Azure storage account using Terraform" loading="lazy" width="394" height="448"></figure><h2 id="resource-group-v-net-and-subnets">Resource group, V-net, and subnets</h2><p>We need one variable, the resource group name. Let&apos;s create it first; I am hardcoding its value.</p><pre><code class="language-hcl">variable &quot;resource_group_name&quot; {
  type = string
  default = &quot;qwerty12344321&quot;
}</code></pre><p>All resources will be created in a single resource group for the sake of simplicity. You can create separate resource groups if you want.</p><pre><code class="language-hcl">resource &quot;azurerm_resource_group&quot; &quot;example&quot; {
  name     = var.resource_group_name
  location = &quot;East US&quot;
}</code></pre><p>Now we need a virtual network and 2 subnets inside it. For one of the subnets, set the flag <code>enforce_private_link_endpoint_network_policies</code> to <code>true</code>; this is necessary to create a private endpoint in that subnet.</p><pre><code class="language-hcl">resource &quot;azurerm_virtual_network&quot; &quot;example&quot; {
  name                = &quot;example-network&quot;
  address_space       = [&quot;10.0.0.0/16&quot;]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

# we will create VM in this subnet
resource &quot;azurerm_subnet&quot; &quot;public_subnet&quot; {
  name                 = &quot;public_subnet&quot;
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = [&quot;10.0.1.0/24&quot;]
}

# we will create private endpoint in this subnet
resource &quot;azurerm_subnet&quot; &quot;endpoint_subnet&quot; {
  name                 = &quot;endpoint_subnet&quot;
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = [&quot;10.0.2.0/24&quot;]

  enforce_private_link_endpoint_network_policies = true
}</code></pre><h2 id="the-virtual-machine-in-the-public-subnet">The virtual machine in the public subnet</h2><p>Now, we will write a module to create a virtual machine. We will place it in the public subnet and give it a public IP so that we can connect to it using SSH. To create the module, create a directory named <code>vm</code> and start creating files in it.</p><p>Create the public IP first, so that we can attach it to the network interface.</p><pre><code class="language-hcl">resource &quot;azurerm_public_ip&quot; &quot;public_ip&quot; {
  name                = format(&quot;%s_%s&quot;, var.name, &quot;ip&quot;)
  resource_group_name = var.resource_group_name
  location            = var.location
  allocation_method   = &quot;Dynamic&quot;
}</code></pre><p>Now we can create a network interface and attach public IP (previously created) to it.</p><pre><code class="language-hcl">resource &quot;azurerm_network_interface&quot; &quot;example&quot; {
  name                = format(&quot;%s_%s&quot;, var.name, &quot;network_interface&quot;)
  location            = var.location
  resource_group_name = var.resource_group_name

  ip_configuration {
    name                          = &quot;internal&quot;
    subnet_id                     = var.subnet_id
    private_ip_address_allocation = &quot;Dynamic&quot;
    public_ip_address_id = azurerm_public_ip.public_ip.id
  }
}</code></pre><p>As the network interface is available to use, we will create an ubuntu server VM.</p><pre><code class="language-hcl">resource &quot;azurerm_linux_virtual_machine&quot; &quot;example&quot; {
  name                = format(&quot;%s%s&quot;, var.name, &quot;vm&quot;)
  resource_group_name = var.resource_group_name
  location            = var.location
  size                = &quot;Standard_B1s&quot;
  admin_username      = &quot;adminuser&quot;

  network_interface_ids = [
    azurerm_network_interface.example.id,
  ]

  admin_ssh_key {
    username   = &quot;adminuser&quot;
    public_key = file(var.public_key_path)
  }

  os_disk {
    caching              = &quot;ReadWrite&quot;
    storage_account_type = &quot;Standard_LRS&quot;
  }

  source_image_reference {
    publisher = &quot;Canonical&quot;
    offer     = &quot;UbuntuServer&quot;
    sku       = &quot;16.04-LTS&quot;
    version   = &quot;latest&quot;
  }

}</code></pre><p>We want to access this VM using SSH. For that, we need to add the network security group to the network interface. Let&apos;s create NSG and attach it to the already created network interface.</p><pre><code class="language-hcl">resource &quot;azurerm_network_security_group&quot; &quot;nsg&quot; {
  name                = format(&quot;%s_%s&quot;, var.name, &quot;nsg&quot;)
  location            = var.location
  resource_group_name = var.resource_group_name

  security_rule {
    name                       = &quot;allow_ssh_sg&quot;
    priority                   = 100 
    direction                  = &quot;Inbound&quot;
    access                     = &quot;Allow&quot;
    protocol                   = &quot;Tcp&quot;
    source_port_range          = &quot;*&quot;
    destination_port_range     = &quot;22&quot;
    source_address_prefix      = &quot;*&quot;
    destination_address_prefix = &quot;*&quot;
  }

  depends_on = [
    azurerm_network_interface.example
  ]
}

resource &quot;azurerm_network_interface_security_group_association&quot; &quot;association&quot; {
  network_interface_id      = azurerm_network_interface.example.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}</code></pre><p>Since <code>vm</code> is a module, we will pass variables to it when using it. Let&apos;s declare these variables so that the module is reusable.</p><pre><code class="language-hcl">variable &quot;resource_group_name&quot; {
  type = string
}

variable &quot;public_key_path&quot; {
  type = string
}

variable &quot;name&quot; {
  type = string
}

variable &quot;subnet_id&quot; {
  type = string
}

variable &quot;location&quot; {
  type = string
}</code></pre><p>This <code>vm</code> module will output <code>public_ip</code> of the virtual machine.</p><pre><code class="language-hcl">output &quot;public_ip&quot; {
  value = azurerm_public_ip.public_ip.ip_address
}</code></pre><p>Our module for the virtual machine is ready. Let&apos;s use it.</p><pre><code class="language-hcl">module &quot;vm&quot; {
    # using the module from outside the vm directory
    source = &quot;./vm/&quot;
    
    resource_group_name = azurerm_resource_group.example.name
    location = azurerm_resource_group.example.location
    
    # make sure you have public key at this location
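    # (if you don&apos;t have a key pair yet: ssh-keygen -t rsa -b 4096)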
    public_key_path = &quot;C:/Users/wv3cxq/.ssh/id_rsa.pub&quot;
    name = &quot;demo&quot;
    subnet_id = azurerm_subnet.public_subnet.id
}</code></pre><h2 id="storage-account-with-one-container">Storage account with one container</h2><p>Now, we will write a module to create a storage account. To create the module, create a directory named <code>storageaccount</code> and start creating files in it.</p><pre><code class="language-hcl">resource &quot;azurerm_storage_account&quot; &quot;storage&quot; {
  name                     = format(&quot;%s%s&quot;, var.name, &quot;storage9553&quot;)
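  # (storage account names must be globally unique and contain
  #  only 3-24 lowercase letters and digits, hence the suffix)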
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = &quot;Standard&quot;
  account_replication_type = &quot;GRS&quot;

  network_rules {
      default_action = &quot;Deny&quot;
      ip_rules = var.white_list_ip
  }
}</code></pre><p>As we used variables in the virtual machine module, this module also needs some variables.</p><pre><code class="language-hcl">variable &quot;resource_group_name&quot; {
  type = string
}

variable &quot;name&quot; {
  type = string
}

variable &quot;location&quot; {
  type = string
}

variable &quot;white_list_ip&quot; {
  type = list(string)
  default = []
}</code></pre><p>We will output some values that let us connect to the storage account using the Azure CLI. <strong>When outputting such values in production, please mark them as <code>sensitive</code>.</strong></p><pre><code class="language-hcl">output &quot;primary_connection_string&quot; {
  value = azurerm_storage_account.storage.primary_connection_string
}

output &quot;storage_account_id&quot; {
    value = azurerm_storage_account.storage.id
}

output &quot;primary_access_key&quot; {
    value = azurerm_storage_account.storage.primary_access_key
}

output &quot;storage_account_name&quot; {
  value = azurerm_storage_account.storage.name
}</code></pre><p>Our module for the storage account is now complete. Let&apos;s use it to create a storage account and then we will create the container in it.</p><pre><code class="language-hcl">module &quot;storage_account&quot; {
    source = &quot;./storageaccount&quot;

    resource_group_name = azurerm_resource_group.example.name
    location = azurerm_resource_group.example.location
    name = &quot;demo&quot;
    
    white_list_ip = []
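    # (example: to whitelist your workstation while applying,
    #  add its public IP, e.g. white_list_ip = [&quot;203.0.113.7&quot;])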
}

resource &quot;azurerm_storage_container&quot; &quot;container&quot; {
  name                  = &quot;demo&quot;
  storage_account_name  = module.storage_account.storage_account_name
  container_access_type = &quot;private&quot;
}</code></pre><p>Now we have a storage account. You can check it in the Azure portal, or add your machine&apos;s public IP to <code>white_list_ip</code> and try to access it using the Azure CLI. We will see how to access blob storage using the Azure CLI at the end of this blog.</p><h2 id="dns-zone-and-private-endpoint">DNS zone and private endpoint</h2><p>Before creating a private endpoint, we need a private DNS zone so that we can create an <code>A</code> record for the private endpoint in this zone. We also need to link this zone to the virtual network (this is necessary, otherwise name resolution will fail).</p><pre><code class="language-hcl">resource &quot;azurerm_private_dns_zone&quot; &quot;example&quot; {
  name                = &quot;privatelink.blob.core.windows.net&quot;
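  # (this exact zone name is required for blob private endpoints;
  #  other subresources use their own zones, e.g.
  #  privatelink.file.core.windows.net for file shares)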
  resource_group_name = azurerm_resource_group.example.name
}

resource &quot;azurerm_private_dns_zone_virtual_network_link&quot; &quot;network_link&quot; {
  name                  = &quot;vnet_link&quot;
  resource_group_name   = azurerm_resource_group.example.name
  private_dns_zone_name = azurerm_private_dns_zone.example.name
  virtual_network_id    = azurerm_virtual_network.example.id
}</code></pre><p>Now the most important part: creating the module for the private endpoint. We will create the <code>A</code> record in this module itself, so that the module stays reusable.</p><pre><code class="language-hcl">resource &quot;azurerm_private_endpoint&quot; &quot;endpoint&quot; {
  name                = format(&quot;%s-%s&quot;, var.name, &quot;private-endpoint&quot;)
  location            = var.location
  resource_group_name = var.resource_group_name
  subnet_id           = var.subnet_id

  private_service_connection {
    name                           = format(&quot;%s-%s&quot;, var.name, &quot;privateserviceconnection&quot;)
    private_connection_resource_id = var.private_link_enabled_resource_id
    is_manual_connection           = false
    subresource_names              = var.subresource_names
  }
}

resource &quot;azurerm_private_dns_a_record&quot; &quot;dns_a&quot; {
  name                = format(&quot;%s-%s&quot;, var.name, &quot;arecord&quot;)
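  # (note: for &lt;account&gt;.privatelink.blob.core.windows.net to
  #  resolve, the record name must match the storage account name;
  #  a different name produces a different FQDN)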
  zone_name           = var.private_dns_zone_name
  resource_group_name = var.resource_group_name
  ttl                 = 300
  records             = [azurerm_private_endpoint.endpoint.private_service_connection.0.private_ip_address]
}</code></pre><p>Let&apos;s create variables and outputs for this module also.</p><pre><code class="language-hcl">variable &quot;resource_group_name&quot; {
  type = string
}

variable &quot;name&quot; {
  type = string
}

variable &quot;location&quot; {
  type = string
}

variable &quot;subnet_id&quot; {
  type = string
}

variable &quot;private_link_enabled_resource_id&quot; {
  type = string
}

variable &quot;private_dns_zone_name&quot; {
  type = string
}

variable &quot;subresource_names&quot; {
  type = list(string)
}</code></pre><p>We need a fully qualified domain name (FQDN) of a private endpoint to access the storage account. Let&apos;s output it.</p><pre><code class="language-hcl">output &quot;dns_a_record&quot; {
    value = azurerm_private_dns_a_record.dns_a.fqdn
}</code></pre><p>Use this module to create a private endpoint with an <code>A</code> record in the previously created private DNS zone.</p><pre><code class="language-hcl">module &quot;privateendpoint&quot; {
    source = &quot;./privateendpoint/&quot;

    resource_group_name = azurerm_resource_group.example.name
    location = azurerm_resource_group.example.location
    name = &quot;demo&quot;

    subnet_id = azurerm_subnet.endpoint_subnet.id
    private_link_enabled_resource_id = module.storage_account.storage_account_id
    private_dns_zone_name = azurerm_private_dns_zone.example.name
    
    # you can add other subresources also
    subresource_names = [&quot;blob&quot;]

    depends_on = [
      azurerm_private_dns_zone.example
    ]
}</code></pre><p>Everything is in place except the output values. The modules output some values, but to surface them we need to declare outputs in the root module as well.</p><pre><code class="language-hcl">output &quot;dns_a_record&quot; {
    value = module.privateendpoint.dns_a_record
}

output &quot;primary_connection_string&quot; {
  value = module.storage_account.primary_connection_string
}

output &quot;storage_account_id&quot; {
    value = module.storage_account.storage_account_id
}

output &quot;public_ip&quot; {
    value = module.vm.public_ip
}

output &quot;storage_primary_access_key&quot; {
    value = module.storage_account.primary_access_key
}</code></pre><h2 id="creating-resources">Creating resources</h2><p>First, run the following command to initialize the working directory and download all plugins and modules.</p><pre><code class="language-sh">terraform init</code></pre><p>To check what will be created, generate a plan with the following command.</p><pre><code class="language-sh">terraform plan</code></pre><p>Check the plan; if everything is OK, you can create all resources with a single command.</p><pre><code class="language-sh">terraform apply</code></pre><p>While creating resources using the apply command, you may get errors like the one below. This happens because the machine running <code>terraform apply</code> is not allowed to access the storage account, so Terraform cannot create the container. You can add your public IP to the <code>white_list_ip</code> list.</p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/11/image-1.png" class="kg-image" alt="Creating private endpoint for Azure storage account using Terraform" loading="lazy"></figure><p>All resources got created after adding my public IP to the <code>white_list_ip</code> list. A total of 15 resources were created. It shows 1 added and 1 destroyed because I reran apply.</p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/11/image-7.png" class="kg-image" alt="Creating private endpoint for Azure storage account using Terraform" loading="lazy"></figure><p>As you can see, I got the public IP <code>20.119.70.83</code> as output. Let&apos;s connect to it using SSH, but before that let me show you my storage account&apos;s networking from the console. I am allowing access only from my personal computer, so the storage account should not be accessible from the virtual machine.
But because of the private endpoint, it is still accessible.</p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/11/image-3.png" class="kg-image" alt="Creating private endpoint for Azure storage account using Terraform" loading="lazy" width="1333" height="596" srcset="https://avabodha.in/content/images/size/w600/2021/11/image-3.png 600w, https://avabodha.in/content/images/size/w1000/2021/11/image-3.png 1000w, https://avabodha.in/content/images/2021/11/image-3.png 1333w" sizes="(min-width: 720px) 720px"></figure><p>If you go to the <strong>Private endpoint connections</strong> tab, you can find the private endpoint which we created.</p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/11/image-4.png" class="kg-image" alt="Creating private endpoint for Azure storage account using Terraform" loading="lazy"></figure><h2 id="connect-to-vm-and-test-access">Connect to VM and Test access</h2><p>To connect to the virtual machine, run the following command:</p><pre><code class="language-sh">ssh -i &lt;private_key_path&gt; adminuser@&lt;public_ip&gt;</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/11/image-5.png" class="kg-image" alt="Creating private endpoint for Azure storage account using Terraform" loading="lazy"></figure><p>To check access to the storage account, we need the Azure CLI, which is not installed on the virtual machine by default. Install it first using the following commands:</p><pre><code class="language-sh">sudo apt update
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash</code></pre><p>Check whether DNS name resolution is working by running the following command. You can find the FQDN in the output values; we output it as <code>dns_a_record</code>. It should point to a private IP in the <code>endpoint_subnet</code>.</p><pre><code class="language-sh">nslookup demostorage9553.privatelink.blob.core.windows.net</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/11/image-8.png" class="kg-image" alt="Creating private endpoint for Azure storage account using Terraform" loading="lazy"></figure><!--kg-card-begin: markdown--><blockquote>
<p>I tried creating a record with a name other than <code>demostorage9553</code>, but it wasn&apos;t working. If anybody knows why, or can point me to a resource that explains it, that would be great. I think the mapping is based on the record name. Also, please tell me if there is any way to pass the endpoint to the Azure CLI for the storage account.</p>
</blockquote>
<!--kg-card-end: markdown--><p>Now use the Azure CLI to check access to blob storage. I added one object manually to the <code>demo</code> container. Run the following command to list the data in the <code>demo</code> container. You can get the <code>connection string</code> from the outputs.</p><pre><code class="language-sh">az storage blob list \
  --container-name demo \
  --connection-string &lt;connection_string&gt;</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/11/image-9.png" class="kg-image" alt="Creating private endpoint for Azure storage account using Terraform" loading="lazy" width="1459" height="458" srcset="https://avabodha.in/content/images/size/w600/2021/11/image-9.png 600w, https://avabodha.in/content/images/size/w1000/2021/11/image-9.png 1000w, https://avabodha.in/content/images/2021/11/image-9.png 1459w" sizes="(min-width: 720px) 720px"></figure><h2 id="destroy-all-resources">Destroy all resources</h2><pre><code class="language-sh">terraform destroy</code></pre><h3 id="references">References</h3><!--kg-card-begin: markdown--><p><a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview?toc=/azure/storage/blobs/toc.json">Storage account overview</a> <br>
<a href="https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-overview">What is the private endpoint?</a><br>
<a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-private-endpoints?toc=/azure/storage/blobs/toc.json">Use private endpoints for azure storage</a> <br>
<a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt">Install Azure CLI in an ubuntu machine</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Create your first Azure VM instance using Terraform]]></title><description><![CDATA[We will create an Azure VM instance using the IaaC tool terraform. You can connect to that instance using SSH.]]></description><link>https://avabodha.in/create-your-first-azure-virtual-machine-instance-using-terraform/</link><guid isPermaLink="false">61743722ae53a50ed94d2ee2</guid><category><![CDATA[terraform]]></category><category><![CDATA[cloud]]></category><category><![CDATA[azure]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Sat, 23 Oct 2021 18:31:14 GMT</pubDate><media:content url="https://avabodha.in/content/images/2021/10/terraform-azure.png" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2021/10/terraform-azure.png" alt="Create your first Azure VM instance using Terraform"><p>Before jumping into code, you need to understand what is IaaC i.e. Infrastructure as a Code and what are the advantages of using it.</p><!--kg-card-begin: markdown--><blockquote>
<p><strong>According to Wikipedia</strong>, Infrastructure as code is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. <strong><a href="https://en.wikipedia.org/wiki/Infrastructure_as_code">Read more...</a></strong></p>
</blockquote>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="advantages-of-iaac-%F0%9F%94%A5">Advantages of IaaC &#x1F525;</h3>
<ul>
<li><strong>Speed and simplicity</strong>: by running a simple command you can create or destroy the whole infrastructure.</li>
<li><strong>Configuration consistency</strong>: the infrastructure created is the same every time (as long as the code is the same).</li>
<li><strong>Minimization of risk</strong>: humans tend to make mistakes; computers don&apos;t.</li>
<li><strong>Increased efficiency</strong>: no need to create infrastructure manually every time.</li>
</ul>
<!--kg-card-end: markdown--><p><strong>Terraform</strong> is one such tool. In this blog post, we will create an Azure virtual machine that resides in a virtual network and can be accessed using SSH.</p><h2 id="spin-azure-vm">Spin Azure VM</h2><p>You can find all code used in this post at <a href="https://github.com/lets-learn-it/terraform-learning/tree/azure/00-vm-instance">https://github.com/lets-learn-it/terraform-learning/tree/azure/00-vm-instance</a></p><p>We will use a virtual network with CIDR <strong>10.0.0.0/16</strong> containing 1 subnet with CIDR <strong>10.0.2.0/24</strong>. We put our VM instance behind a network security group which allows SSH inbound connections and all outbound connections.</p><!--kg-card-begin: html--><div class="mxgraph" style="max-width:100%;border:1px solid transparent;" data-mxgraph="{&quot;highlight&quot;:&quot;#0000ff&quot;,&quot;nav&quot;:true,&quot;resize&quot;:true,&quot;toolbar&quot;:&quot;zoom layers tags lightbox&quot;,&quot;edit&quot;:&quot;_blank&quot;,&quot;url&quot;:&quot;https://drive.google.com/uc?id=1HsoRKX-ECcdgB_2jBcdnfG17afSmysaS&amp;export=download&quot;}"></div>
<script type="text/javascript" src="https://viewer.diagrams.net/embed2.js?&amp;fetch=https%3A%2F%2Fdrive.google.com%2Fuc%3Fid%3D1HsoRKX-ECcdgB_2jBcdnfG17afSmysaS%26export%3Ddownload"></script><!--kg-card-end: html--><h3 id="add-provider-block">Add Provider block</h3><p>The provider block tells Terraform which kind of infrastructure we want to create; in our example, Azure. In the <code>terraform</code> block, we can pin the version of the provider plugin we want to use.</p><pre><code class="language-hcl">provider &quot;azurerm&quot; {
  features {}
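  # (to pin the provider plugin version, add a separate
  #  terraform block, for example:
  #  terraform {
  #    required_providers {
  #      azurerm = {
  #        source  = &quot;hashicorp/azurerm&quot;
  #        version = &quot;~&gt; 2.80&quot;  # example version, an assumption
  #      }
  #    }
  #  })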
}</code></pre><h3 id="create-resource-group">Create Resource Group</h3><p>We will create a resource group in the nearest region; in my case, Central India. A resource group lets us keep all related resources in a single folder-like structure. <strong>If you delete the resource group, all resources in it will get deleted.</strong></p><pre><code class="language-hcl">resource &quot;azurerm_resource_group&quot; &quot;example&quot; {
  name     = &quot;example-resources&quot;
  location = &quot;Central India&quot;
}</code></pre><h3 id="v-net-and-subnet">V-net and subnet</h3><p>We will create a VNet with CIDR <strong>10.0.0.0/16</strong> and 1 subnet with CIDR <strong>10.0.2.0/24</strong>. Our virtual machine will be in this subnet.</p><pre><code class="language-hcl">resource &quot;azurerm_virtual_network&quot; &quot;example&quot; {
  name                = &quot;example-network&quot;
  address_space       = [&quot;10.0.0.0/16&quot;]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource &quot;azurerm_subnet&quot; &quot;example&quot; {
  name                 = &quot;internal&quot;
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = [&quot;10.0.2.0/24&quot;]
}</code></pre><h3 id="public-ip">Public IP</h3><p>To access our virtual machine, we need a public IP. A public IP can be static or dynamic. A static IP exists independently of the resource, so it is available before the resource is created and after it is deleted; a dynamic IP is only allocated once the resource that uses it is running. We will be using Dynamic.</p><pre><code class="language-hcl">resource &quot;azurerm_public_ip&quot; &quot;public_ip&quot; {
  name                = &quot;vm_public_ip&quot;
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  allocation_method   = &quot;Dynamic&quot;
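  # (use &quot;Static&quot; if the IP must survive deallocation
  #  of the resource that uses it)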
}</code></pre><h3 id="network-interface">Network Interface</h3><p>A <em>Network Interface</em> (<em>NIC</em>) is an interconnection between a Virtual Machine and the underlying software network. An Azure Virtual Machine has one or more network interfaces attached to it. Any NIC can have one or more static or dynamic public and private IP addresses assigned to it. In our case, we are attaching Dynamic public IP to the network interface.</p><pre><code class="language-hcl">resource &quot;azurerm_network_interface&quot; &quot;example&quot; {
  name                = &quot;example-nic&quot;
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = &quot;internal&quot;
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = &quot;Dynamic&quot;
    public_ip_address_id = azurerm_public_ip.public_ip.id
  }
}</code></pre><h3 id="network-security-group">Network Security Group</h3><p>We are allowing only SSH connections. <strong>The lower the priority number, the higher the priority. </strong>By default, NSG denies all connections.</p><pre><code class="language-hcl">resource &quot;azurerm_network_security_group&quot; &quot;nsg&quot; {
  name                = &quot;ssh_nsg&quot;
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  security_rule {
    name                       = &quot;allow_ssh_sg&quot;
    priority                   = 100
    direction                  = &quot;Inbound&quot;
    access                     = &quot;Allow&quot;
    protocol                   = &quot;Tcp&quot;
    source_port_range          = &quot;*&quot;
    destination_port_range     = &quot;22&quot;
    source_address_prefix      = &quot;*&quot;
    destination_address_prefix = &quot;*&quot;
  }
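
  # a second rule (hypothetical example) would need a different priority;
  # lower numbers are evaluated first:
  #
  # security_rule {
  #   name                       = &quot;allow_http&quot;
  #   priority                   = 110
  #   direction                  = &quot;Inbound&quot;
  #   access                     = &quot;Allow&quot;
  #   protocol                   = &quot;Tcp&quot;
  #   source_port_range          = &quot;*&quot;
  #   destination_port_range     = &quot;80&quot;
  #   source_address_prefix      = &quot;*&quot;
  #   destination_address_prefix = &quot;*&quot;
  # }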
}</code></pre><h3 id="associate-nsg-with-interface">Associate NSG with interface</h3><p>you can associate NSG with <strong>either subnet or network interface.</strong> In our case, the network interface is allowing access to the virtual machine. We will associate NSG with network interface as follow, </p><pre><code class="language-hcl">resource &quot;azurerm_network_interface_security_group_association&quot; &quot;association&quot; {
  network_interface_id      = azurerm_network_interface.example.id
  network_security_group_id = azurerm_network_security_group.nsg.id
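
  # alternatively (hypothetical sketch), the NSG can be attached to the
  # subnet instead of the NIC:
  #
  # resource &quot;azurerm_subnet_network_security_group_association&quot; &quot;subnet_assoc&quot; {
  #   subnet_id                 = azurerm_subnet.example.id
  #   network_security_group_id = azurerm_network_security_group.nsg.id
  # }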
}</code></pre><h3 id="create-virtual-machine">Create Virtual Machine</h3><p><strong>Standard_B1s </strong> is the cheapest VM available. It will cost less than 1 rupee per hour. Attach network interface which we already created. To access that VM, we need to add ssh key also.</p><pre><code class="language-hcl">resource &quot;azurerm_linux_virtual_machine&quot; &quot;example&quot; {
  name                = &quot;example-machine&quot;
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  size                = &quot;Standard_B1s&quot;
  admin_username      = &quot;adminuser&quot;

  network_interface_ids = [
    azurerm_network_interface.example.id,
  ]

  admin_ssh_key {
    username   = &quot;adminuser&quot;
    public_key = file(&quot;~/.ssh/id_rsa.pub&quot;)
  }

  os_disk {
    caching              = &quot;ReadWrite&quot;
    storage_account_type = &quot;Standard_LRS&quot;
  }

  # an OS image is required; Ubuntu 18.04 LTS is shown here as an example
  source_image_reference {
    publisher = &quot;Canonical&quot;
    offer     = &quot;UbuntuServer&quot;
    sku       = &quot;18.04-LTS&quot;
    version   = &quot;latest&quot;
  }
}</code></pre><h3 id="output">Output</h3><p>We will output the public IP so that we can connect to the machine using SSH.</p><pre><code class="language-hcl">output &quot;public_ip&quot; {
  value = azurerm_public_ip.public_ip.ip_address
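
  # with Dynamic allocation, the address is assigned only after the IP
  # is attached to the running VM, so this may be empty on the first apply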
}</code></pre><hr><h3 id="create-infrastructure-%F0%9F%9B%A0">Create Infrastructure &#x1F6E0;</h3><h4 id="initialize-terraform-plugin">Initialize Terraform Plugin</h4><p>We need to fetch Terraform&apos;s azurerm plugin before we start creating infrastructure. You can initialize the plugin using the following command:</p><pre><code class="language-sh">terraform init</code></pre><h4 id="check-plan">Check Plan</h4><p>Before creating the Azure VM, we can check the infrastructure plan, which tells us what resources Terraform will create. To check the plan, run the following command:</p><pre><code class="language-sh">terraform plan</code></pre><h3 id="create-infra">Create Infra</h3><p>Finally, we can create the infrastructure, but make sure you are logged in to Azure on the command line. Run the following command to create our Azure virtual machine. When it asks <code>Enter the value:</code>, give it <code>yes</code>.</p><pre><code class="language-sh">terraform apply</code></pre><!--kg-card-begin: markdown--><p>After applying, you may not get the public IP as output. This is because a Dynamic public IP only receives its address once it is attached to the running VM, which happens after Terraform has already read the (still empty) value. Simply run <code>terraform apply</code> again and the IP will be shown.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/10/image-5.png" class="kg-image" alt="Create your first Azure VM instance using Terraform" loading="lazy" width="1231" height="378" srcset="https://avabodha.in/content/images/size/w600/2021/10/image-5.png 600w, https://avabodha.in/content/images/size/w1000/2021/10/image-5.png 1000w, https://avabodha.in/content/images/2021/10/image-5.png 1231w" sizes="(min-width: 720px) 720px"></figure><p>You will get a public IP; I got <code>20.204.9.163</code>. We will connect to the machine using SSH. Just make sure you know the passphrase of the SSH key pair.</p><pre><code class="language-sh">ssh -i ~/.ssh/id_rsa adminuser@20.204.9.163</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/10/image-6.png" class="kg-image" alt="Create your first Azure VM instance using Terraform" loading="lazy" width="909" height="827" srcset="https://avabodha.in/content/images/size/w600/2021/10/image-6.png 600w, https://avabodha.in/content/images/2021/10/image-6.png 909w" sizes="(min-width: 720px) 720px"></figure><h3 id="azure-resource-visualizer">Azure resource visualizer</h3><p>You can see all the created resources as a diagram in the Azure portal. Go to the resource group; you will find the Resource visualizer option on the left side.</p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/10/image-7.png" class="kg-image" alt="Create your first Azure VM instance using Terraform" loading="lazy" width="901" height="719" srcset="https://avabodha.in/content/images/size/w600/2021/10/image-7.png 600w, https://avabodha.in/content/images/2021/10/image-7.png 901w" sizes="(min-width: 720px) 720px"></figure><h3 id="destroy-infra">Destroy Infra</h3><p>After using the Azure VM, we want to destroy it to save money. Terraform can do that with a single command. 
It will ask you again for confirmation; give it <code>yes</code>.</p><pre><code class="language-sh">terraform destroy</code></pre>]]></content:encoded></item><item><title><![CDATA[Create Your first AWS EC2 instance using Terraform]]></title><description><![CDATA[We will create an AWS EC2 instance using the IaC tool Terraform. You can connect to that instance using SSH.]]></description><link>https://avabodha.in/create-your-first-aws-ec2-instance-using-terraform/</link><guid isPermaLink="false">616ac4f7ae53a50ed94d2dd3</guid><category><![CDATA[terraform]]></category><category><![CDATA[cloud]]></category><category><![CDATA[aws]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Sat, 16 Oct 2021 13:30:09 GMT</pubDate><media:content url="https://avabodha.in/content/images/2021/10/ec2-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2021/10/ec2-.jpg" alt="Create Your first AWS EC2 instance using Terraform"><p>Before jumping into the code, you need to understand what IaC (Infrastructure as Code) is and what the advantages of using it are.</p><!--kg-card-begin: markdown--><blockquote>
<p><strong>According to Wikipedia</strong>, Infrastructure as code is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. <strong><a href="https://en.wikipedia.org/wiki/Infrastructure_as_code">Read more...</a></strong></p>
</blockquote>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="advantages-of-iaac-%F0%9F%94%A5">Advantages of IaC &#x1F525;</h3>
<ul>
<li><strong>Speed and simplicity</strong>: by running a single command you can create or destroy the whole infrastructure.</li>
<li><strong>Configuration consistency</strong>: the created infrastructure is identical every time (as long as the code is the same).</li>
<li><strong>Minimization of risk</strong>: humans tend to make mistakes; computers do not.</li>
<li><strong>Increased efficiency</strong>: no need to create the infrastructure manually every time.</li>
</ul>
<!--kg-card-end: markdown--><p><strong>Terraform</strong> is one such tool. In this blog post, we will create an EC2 machine that resides in the default VPC and can be accessed using SSH.</p><h2 id="spin-ec2-instance">Spin EC2 Instance</h2><!--kg-card-begin: markdown--><blockquote>
<p><strong>According to Wikipedia</strong>, Amazon Elastic Compute Cloud (EC2) is a part of Amazon.com&apos;s cloud-computing platform, Amazon Web Services (AWS), that allows users to rent virtual computers on which to run their own computer applications.</p>
</blockquote>
<!--kg-card-end: markdown--><p>You can find all the code for this post at <a href="https://github.com/lets-learn-it/terraform-learning/tree/aws/00-ec2-instance">https://github.com/lets-learn-it/terraform-learning/tree/aws/00-ec2-instance</a></p><p>We will be using the default VPC (CIDR <strong>172.31.0.0/16</strong>) with 3 subnets. As you can see, we are using the default route table and Internet gateway for the sake of simplicity. We put our EC2 instance in a security group that allows SSH inbound connections and all outbound connections.</p><!--kg-card-begin: html--><div class="mxgraph" style="max-width:100%;border:1px solid transparent;" data-mxgraph="{&quot;highlight&quot;:&quot;#0000ff&quot;,&quot;nav&quot;:true,&quot;resize&quot;:true,&quot;toolbar&quot;:&quot;zoom layers tags lightbox&quot;,&quot;edit&quot;:&quot;_blank&quot;,&quot;url&quot;:&quot;https://drive.google.com/uc?id=1-5mAEqljInFBzv7rIashFi3yxQBYVknl&amp;export=download&quot;}"></div>
<script type="text/javascript" src="https://viewer.diagrams.net/embed2.js?&amp;fetch=https%3A%2F%2Fdrive.google.com%2Fuc%3Fid%3D1-5mAEqljInFBzv7rIashFi3yxQBYVknl%26export%3Ddownload"></script><!--kg-card-end: html--><h3 id="add-provider-block">Add Provider block</h3><p>The provider block tells Terraform which kind of infrastructure we want to create; in our example, AWS resources. In the terraform block, we can pin the plugin version we want to use.</p><pre><code class="language-hcl">provider &quot;aws&quot; {
  # I am using Mumbai region
  region = &quot;ap-south-1&quot;
}

terraform {
  required_providers {
    aws = {
      source  = &quot;hashicorp/aws&quot;
      version = &quot;~&gt; 3.37.0&quot;
    }
  }
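
  # the &quot;~&gt; 3.37.0&quot; constraint allows patch releases (3.37.x)
  # but not 3.38.0 or later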
}</code></pre><h3 id="create-key-%F0%9F%97%9D">Create key &#x1F5DD;</h3><p>To connect EC2 instances using SSH, we need ssh keys. Let&apos;s create these first. Run the following command, and type passphrase when it asks</p><pre><code class="language-sh">ssh-keygen -t rsa -f ~/.ssh/ec2 </code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/10/image.png" class="kg-image" alt="Create Your first AWS EC2 instance using Terraform" loading="lazy" width="819" height="419" srcset="https://avabodha.in/content/images/size/w600/2021/10/image.png 600w, https://avabodha.in/content/images/2021/10/image.png 819w" sizes="(min-width: 720px) 720px"></figure><pre><code class="language-hcl">resource &quot;aws_key_pair&quot; &quot;key&quot; {
  key_name   = &quot;parikshits_key&quot;
  public_key = file(&quot;~/.ssh/ec2.pub&quot;)
}
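
# only the public key is uploaded to AWS; the private key (~/.ssh/ec2)
# stays on your machine and is passed to ssh with the -i flag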
</code></pre><h3 id="use-default-vpc">Use default VPC</h3><p>AWS creates a default VPC in each region. For this example, we will use the default VPC.</p><pre><code class="language-hcl">resource &quot;aws_default_vpc&quot; &quot;default_vpc&quot; {

}</code></pre><h3 id="create-security-group-%F0%9F%9B%A1">Create Security Group &#x1F6E1;</h3><p>We want to connect to our EC2 machine using SSH, So we need to allow ssh traffic to our machine from anywhere.</p><pre><code class="language-hcl">resource &quot;aws_security_group&quot; &quot;allow_ssh&quot; {
  name        = &quot;allow_ssh&quot;
  description = &quot;Allow ssh inbound traffic&quot;
  
  # using default VPC
  vpc_id      = aws_default_vpc.default_vpc.id

  ingress {
    description = &quot;TLS from VPC&quot;
    
    # allow incoming TCP packets on port 22 (SSH)
    from_port   = 22
    to_port     = 22
    protocol    = &quot;tcp&quot;
    
    # allow all traffic
    cidr_blocks = [&quot;0.0.0.0/0&quot;]
  }

  tags = {
    Name = &quot;allow_ssh&quot;
  }
}</code></pre><h3 id="create-ec2-instance">Create EC2 Instance</h3><p>We will create <code>t2.micro</code> instance (free tier) in this example.</p><pre><code class="language-hcl">resource &quot;aws_instance&quot; &quot;my_ec2&quot; {
  ami             = var.ami_id
  instance_type   = &quot;t2.micro&quot;
  
  # referring to the key which we created earlier
  key_name        = aws_key_pair.key.key_name
  
  # referring to the security group created earlier
  security_groups = [aws_security_group.allow_ssh.name]
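  # note: referencing security groups by name works only in the
  # default VPC; in a custom VPC, use vpc_security_group_ids instead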

  tags = var.tags
}</code></pre><h3 id="add-variables">Add variables</h3><p>We are using <code>var.xxx</code>, which are variables used in our code. We need to define these variables.</p><pre><code class="language-hcl">variable &quot;ami_id&quot; {
  description = &quot;Amazon Linux AMI id&quot;
  
  # I am using an Amazon Linux image
  default     = &quot;ami-0a23ccb2cdd9286bb&quot;
}

variable &quot;tags&quot; {
  type = map(string)
  default = {
    &quot;name&quot; = &quot;parikshit&apos;s ec2&quot;
  }
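
  # the defaults can be overridden at apply time (hypothetical value):
  #   terraform apply -var=&apos;ami_id=ami-xxxxxxxx&apos;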
}</code></pre><h3 id="output-useful-info">Output useful info</h3><p>Terraform can output attributes of resources i.e. <code>public_ip</code> of EC2 instance. We need public IP to connect instances using SSH.</p><pre><code class="language-hcl">output &quot;arn&quot; {
  value = aws_instance.my_ec2.arn
}

output &quot;public_ip&quot; {
  value = aws_instance.my_ec2.public_ip
}</code></pre><hr><h3 id="create-infrastructure-%F0%9F%9B%A0">Create Infrastructure &#x1F6E0;</h3><h4 id="initialize-terraform-plugin">Initialize Terraform Plugin</h4><p>We need to fetch Terraform&apos;s AWS plugin before we start creating infrastructure. You can initialize the plugin using the following command:</p><pre><code class="language-sh">terraform init</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/10/image-1.png" class="kg-image" alt="Create Your first AWS EC2 instance using Terraform" loading="lazy" width="986" height="456" srcset="https://avabodha.in/content/images/size/w600/2021/10/image-1.png 600w, https://avabodha.in/content/images/2021/10/image-1.png 986w" sizes="(min-width: 720px) 720px"></figure><h4 id="check-plan">Check Plan</h4><p>Before creating the AWS EC2 instance, we can check the infrastructure plan, which tells us what resources Terraform will create. To check the plan, run the following command:</p><pre><code class="language-sh">terraform plan</code></pre><!--kg-card-begin: html--><div class="output">
    <pre>

Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols:
  <span class="green">+</span> create

Terraform will perform the following actions:

  # aws_default_vpc.default_vpc will be created
  <span class="green">+</span> resource &quot;aws_default_vpc&quot; &quot;default_vpc&quot; {
      <span class="green">+</span> arn                              = (known after apply)
      <span class="green">+</span> assign_generated_ipv6_cidr_block = (known after apply)
      <span class="green">+</span> cidr_block                       = (known after apply)
      <span class="green">+</span> default_network_acl_id           = (known after apply)
      <span class="green">+</span> default_route_table_id           = (known after apply)
      <span class="green">+</span> default_security_group_id        = (known after apply)
      <span class="green">+</span> dhcp_options_id                  = (known after apply)
      <span class="green">+</span> enable_classiclink               = (known after apply)
      <span class="green"><span class="green">+</span></span> enable_classiclink_dns_support   = (known after apply)
      <span class="green">+</span> enable_dns_hostnames             = (known after apply)
      <span class="green">+</span> enable_dns_support               = true
      <span class="green">+</span> id                               = (known after apply)
      <span class="green">+</span> instance_tenancy                 = (known after apply)
      <span class="green">+</span> ipv6_association_id              = (known after apply)
      <span class="green">+</span> ipv6_cidr_block                  = (known after apply)
      <span class="green">+</span> main_route_table_id              = (known after apply)
      <span class="green">+</span> owner_id                         = (known after apply)
      <span class="green">+</span> tags_all                         = (known after apply)
    }

  # aws_instance.my_ec2 will be created
  <span class="green">+</span> resource &quot;aws_instance&quot; &quot;my_ec2&quot; {
      <span class="green">+</span> ami                          = &quot;ami-0a23ccb2cdd9286bb&quot;
      <span class="green">+</span> arn                          = (known after apply)
      <span class="green">+</span> associate_public_ip_address  = (known after apply)
      <span class="green">+</span> availability_zone            = (known after apply)
      <span class="green">+</span> cpu_core_count               = (known after apply)
      <span class="green">+</span> cpu_threads_per_core         = (known after apply)
      <span class="green">+</span> get_password_data            = false
      <span class="green">+</span> host_id                      = (known after apply)
      <span class="green">+</span> id                           = (known after apply)
      <span class="green">+</span> instance_state               = (known after apply)
      <span class="green">+</span> instance_type                = &quot;t2.micro&quot;
      <span class="green">+</span> ipv6_address_count           = (known after apply)
      <span class="green">+</span> ipv6_addresses               = (known after apply)
      <span class="green">+</span> key_name                     = &quot;parikshits_key&quot;
      <span class="green">+</span> outpost_arn                  = (known after apply)
      <span class="green">+</span> password_data                = (known after apply)
      <span class="green">+</span> placement_group              = (known after apply)
      <span class="green">+</span> primary_network_interface_id = (known after apply)
      <span class="green">+</span> private_dns                  = (known after apply)
      <span class="green">+</span> private_ip                   = (known after apply)
      <span class="green">+</span> public_dns                   = (known after apply)
      <span class="green">+</span> public_ip                    = (known after apply)
      <span class="green">+</span> secondary_private_ips        = (known after apply)
      <span class="green">+</span> security_groups              = [
          <span class="green">+</span> &quot;allow_ssh&quot;,
        ]
      <span class="green">+</span> source_dest_check            = true
      <span class="green">+</span> subnet_id                    = (known after apply)
      <span class="green">+</span> tags                         = {
          <span class="green">+</span> &quot;name&quot; = &quot;parikshit&apos;s ec2&quot;
        }
      <span class="green">+</span> tenancy                      = (known after apply)
      <span class="green">+</span> vpc_security_group_ids       = (known after apply)

      <span class="green">+</span> ebs_block_device {
          <span class="green">+</span> delete_on_termination = (known after apply)
          <span class="green">+</span> device_name           = (known after apply)
          <span class="green">+</span> encrypted             = (known after apply)
          <span class="green">+</span> iops                  = (known after apply)
          <span class="green">+</span> kms_key_id            = (known after apply)
          <span class="green">+</span> snapshot_id           = (known after apply)
          <span class="green">+</span> tags                  = (known after apply)
          <span class="green">+</span> throughput            = (known after apply)
          <span class="green">+</span> volume_id             = (known after apply)
          <span class="green">+</span> volume_size           = (known after apply)
          <span class="green">+</span> volume_type           = (known after apply)
        }

      <span class="green">+</span> enclave_options {
          <span class="green">+</span> enabled = (known after apply)
        }

      <span class="green">+</span> ephemeral_block_device {
          <span class="green">+</span> device_name  = (known after apply)
          <span class="green">+</span> no_device    = (known after apply)
          <span class="green">+</span> virtual_name = (known after apply)
        }

      <span class="green">+</span> metadata_options {
          <span class="green">+</span> http_endpoint               = (known after apply)
          <span class="green">+</span> http_put_response_hop_limit = (known after apply)
          <span class="green">+</span> http_tokens                 = (known after apply)
        }

      <span class="green">+</span> network_interface {
          <span class="green">+</span> delete_on_termination = (known after apply)
          <span class="green">+</span> device_index          = (known after apply)
          <span class="green">+</span> network_interface_id  = (known after apply)
        }

      <span class="green">+</span> root_block_device {
          <span class="green">+</span> delete_on_termination = (known after apply)
          <span class="green">+</span> device_name           = (known after apply)
          <span class="green">+</span> encrypted             = (known after apply)
          <span class="green">+</span> iops                  = (known after apply)
          <span class="green">+</span> kms_key_id            = (known after apply)
          <span class="green">+</span> tags                  = (known after apply)
          <span class="green">+</span> throughput            = (known after apply)
          <span class="green">+</span> volume_id             = (known after apply)
          <span class="green">+</span> volume_size           = (known after apply)
          <span class="green">+</span> volume_type           = (known after apply)
        }
    }

  # aws_key_pair.key will be created
  <span class="green">+</span> resource &quot;aws_key_pair&quot; &quot;key&quot; {
      <span class="green">+</span> arn         = (known after apply)
      <span class="green">+</span> fingerprint = (known after apply)
      <span class="green">+</span> id          = (known after apply)
      <span class="green">+</span> key_name    = &quot;parikshits_key&quot;
      <span class="green">+</span> key_pair_id = (known after apply)
      <span class="green">+</span> public_key  = &quot;ssh-rsa &lt;public key&gt;&quot;
    }

  # aws_security_group.allow_ssh will be created
  <span class="green">+</span> resource &quot;aws_security_group&quot; &quot;allow_ssh&quot; {
      <span class="green">+</span> arn                    = (known after apply)
      <span class="green">+</span> description            = &quot;Allow ssh inbound traffic&quot;
      <span class="green">+</span> egress                 = (known after apply)
      <span class="green">+</span> id                     = (known after apply)
      <span class="green">+</span> ingress                = [
          <span class="green">+</span> {
              <span class="green">+</span> cidr_blocks      = [
                  <span class="green">+</span> &quot;0.0.0.0/0&quot;,
                ]
              <span class="green">+</span> description      = &quot;TLS from VPC&quot;
              <span class="green">+</span> from_port        = 22
              <span class="green">+</span> ipv6_cidr_blocks = []
              <span class="green">+</span> prefix_list_ids  = []
              <span class="green">+</span> protocol         = &quot;tcp&quot;
              <span class="green">+</span> security_groups  = []
              <span class="green">+</span> self             = false
              <span class="green">+</span> to_port          = 22
            },
        ]
      <span class="green">+</span> name                   = &quot;allow_ssh&quot;
      <span class="green">+</span> name_prefix            = (known after apply)
      <span class="green">+</span> owner_id               = (known after apply)
      <span class="green">+</span> revoke_rules_on_delete = false
      <span class="green">+</span> tags                   = {
          <span class="green">+</span> &quot;Name&quot; = &quot;allow_ssh&quot;
        }
      <span class="green">+</span> vpc_id                 = (known after apply)
    }

Plan: <span class="green">4 to add</span>, 0 to change, <span class="red">0 to destroy</span>.

Changes to Outputs:
  <span class="green">+</span> arn       = (known after apply)
  <span class="green">+</span> public_ip = (known after apply)

&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;

Note: You didn&apos;t use the -out option to save this plan, 
so Terraform can&apos;t guarantee to take exactly these actions 
if you run &quot;terraform apply&quot; now.
</pre>
</div><!--kg-card-end: html--><h3 id="create-infra">Create Infra</h3><p>Finally, we can create the infrastructure, but make sure you are logged in to AWS on the command line. Run the following command to create our AWS EC2 machine. When it asks <code>Enter the value:</code>, give it <code>yes</code>.</p><pre><code class="language-sh">terraform apply</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/10/image-2.png" class="kg-image" alt="Create Your first AWS EC2 instance using Terraform" loading="lazy" width="863" height="400" srcset="https://avabodha.in/content/images/size/w600/2021/10/image-2.png 600w, https://avabodha.in/content/images/2021/10/image-2.png 863w" sizes="(min-width: 720px) 720px"></figure><p>You will get a public IP; I got <code>13.232.255.15</code>. We will connect to the machine using SSH. </p><pre><code class="language-sh">ssh -i ~/.ssh/ec2 ec2-user@13.232.255.15</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/10/image-3.png" class="kg-image" alt="Create Your first AWS EC2 instance using Terraform" loading="lazy" width="861" height="279" srcset="https://avabodha.in/content/images/size/w600/2021/10/image-3.png 600w, https://avabodha.in/content/images/2021/10/image-3.png 861w" sizes="(min-width: 720px) 720px"></figure><h3 id="destroy-infra">Destroy Infra</h3><p>After using the AWS EC2 instance, we want to destroy it to save money. Terraform can do that with a single command. 
It will ask you again for confirmation; give it <code>yes</code>.</p><pre><code class="language-sh">terraform destroy</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/10/image-4.png" class="kg-image" alt="Create Your first AWS EC2 instance using Terraform" loading="lazy" width="831" height="300" srcset="https://avabodha.in/content/images/size/w600/2021/10/image-4.png 600w, https://avabodha.in/content/images/2021/10/image-4.png 831w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[Install Driver for ALFA AWUS036ACS on Linux]]></title><description><![CDATA[ALFA AWUS036ACS is the cheapest USB wireless adapter on the market that supports dual-band 2.4 and 5 GHz. It supports both monitor mode and packet injection.]]></description><link>https://avabodha.in/install-driver-for-alfa-awus036acs-on-linux/</link><guid isPermaLink="false">611f1cfeae53a50ed94d2caa</guid><category><![CDATA[networking]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Sat, 21 Aug 2021 05:13:24 GMT</pubDate><media:content url="https://avabodha.in/content/images/2021/08/rs-w-453-h-210.webp" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2021/08/rs-w-453-h-210.webp" alt="Install Driver for ALFA AWUS036ACS on Linux"><p>ALFA AWUS036ACS is the cheapest USB wireless adapter on the market that supports dual-band 2.4 and 5 GHz. It supports both monitor mode and packet injection.</p><p>I recently bought this in India from ALFA&apos;s official site, <a href="https://alfanetwork.co.in/shop/ols/products/alfa-awus036acs">https://alfanetwork.co.in/shop/ols/products/alfa-awus036acs</a>. It uses the Realtek RTL8811AU chipset. I didn&apos;t find any resources online for installing drivers for this chipset; I searched the whole day and finally succeeded. 
&#x1F603;</p><h2 id="features-of-alfa-awus036acs">Features of ALFA AWUS036ACS</h2><ul><li>Support monitor and packet injection mode.</li><li>Support both 2.4 (Up to 150Mbps) and 5 GHz (Up to 433Mbps) frequencies.</li><li>Comes with a 1 x dual-band detachable RP-SMA connector.</li><li>Small in size (18 x 45 x 9 mm)</li></ul><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://avabodha.in/content/images/2021/08/alfa1.webp" class="kg-image" alt="Install Driver for ALFA AWUS036ACS on Linux" loading="lazy" width="480" height="480"><figcaption>ALFA AWUS036ACS with box</figcaption></figure><h2 id="driver-installation">Driver Installation</h2><h3 id="check-usb-adapter-available">Check USB Adapter available</h3><pre><code class="language-shell">lsusb</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/08/scrre.png" class="kg-image" alt="Install Driver for ALFA AWUS036ACS on Linux" loading="lazy" width="918" height="118" srcset="https://avabodha.in/content/images/size/w600/2021/08/scrre.png 600w, https://avabodha.in/content/images/2021/08/scrre.png 918w" sizes="(min-width: 720px) 720px"></figure><h3 id="download-driver-source-and-build">Download driver source and build</h3><p>We will use an open-source <strong><a href="https://github.com/aircrack-ng/rtl8812au">rtl8812au</a> </strong>driver from aircrack-ng.</p><pre><code class="language-shell">git clone https://github.com/aircrack-ng/rtl8812au.git</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/08/image-1.png" class="kg-image" alt="Install Driver for ALFA AWUS036ACS on Linux" loading="lazy" width="840" height="183" srcset="https://avabodha.in/content/images/size/w600/2021/08/image-1.png 600w, https://avabodha.in/content/images/2021/08/image-1.png 840w" sizes="(min-width: 720px) 720px"></figure><p>Now you can go in <code>rtl8812au</code> directory.</p><h4 id="build-source-and-install">Build 
Source and install</h4><pre><code class="language-shell">sudo make</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/08/Screenshot-from-2021-08-21-09-50-49.png" class="kg-image" alt="Install Driver for ALFA AWUS036ACS on Linux" loading="lazy" width="877" height="367" srcset="https://avabodha.in/content/images/size/w600/2021/08/Screenshot-from-2021-08-21-09-50-49.png 600w, https://avabodha.in/content/images/2021/08/Screenshot-from-2021-08-21-09-50-49.png 877w" sizes="(min-width: 720px) 720px"></figure><pre><code class="language-shell">sudo make install</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/08/image-2.png" class="kg-image" alt="Install Driver for ALFA AWUS036ACS on Linux" loading="lazy" width="889" height="92" srcset="https://avabodha.in/content/images/size/w600/2021/08/image-2.png 600w, https://avabodha.in/content/images/2021/08/image-2.png 889w" sizes="(min-width: 720px) 720px"></figure><h4 id="troubleshooting">Troubleshooting</h4><p>Sometimes you may get an error saying file or directory not found.</p><pre><code class="language-shell">make[1]: *** /lib/modules/5.10.0-kali3-amd64/build: No such file or directory. 
Stop</code></pre><p><code>/lib/modules/****/build</code>, where <code>****</code> depends on your kernel version.</p><p>To solve this problem, run the following commands and try again:</p><pre><code class="language-shell">sudo apt update
sudo apt upgrade
sudo apt dist-upgrade -y</code></pre><p>After running these commands, restart the computer and try to install the driver again.</p><p>While running <code>sudo make</code>, you may get errors like the one below:</p><pre><code class="language-shell">make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/5.16.0-kali7-amd64/build M=/home/sandbox/github/rtl8812au  modules
make[1]: *** /lib/modules/5.16.0-kali7-amd64/build: No such file or directory.  Stop.
make: *** [Makefile:2244: modules] Error 2</code></pre><p>To resolve it, run the command below:</p><pre><code class="language-shell">sudo apt-get install linux-headers-$(uname -r)</code></pre><h2 id="checking-adapter">Checking Adapter</h2><h3 id="find-interface">Find Interface</h3><pre><code class="language-shell">iwconfig</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/08/image-3.png" class="kg-image" alt="Install Driver for ALFA AWUS036ACS on Linux" loading="lazy" width="882" height="303" srcset="https://avabodha.in/content/images/size/w600/2021/08/image-3.png 600w, https://avabodha.in/content/images/2021/08/image-3.png 882w" sizes="(min-width: 720px) 720px"></figure><h3 id="get-info-about-the-adapter">Get Info about the adapter</h3><pre><code class="language-shell">iw list</code></pre><h3 id="put-adapter-in-monitor-mode">Put Adapter in Monitor mode</h3><pre><code class="language-shell"># Turn off interface
sudo ifconfig wlx00c0caadd40c down

# Change mode
sudo iw wlx00c0caadd40c set monitor control

# Turn on interface
sudo ifconfig wlx00c0caadd40c up

# check mode
iwconfig</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/08/Screenshot-from-2021-08-21-10-31-08.png" class="kg-image" alt="Install Driver for ALFA AWUS036ACS on Linux" loading="lazy" width="959" height="400" srcset="https://avabodha.in/content/images/size/w600/2021/08/Screenshot-from-2021-08-21-10-31-08.png 600w, https://avabodha.in/content/images/2021/08/Screenshot-from-2021-08-21-10-31-08.png 959w" sizes="(min-width: 720px) 720px"></figure><h3 id="try-packet-injection">Try Packet Injection</h3><p>To perform packet injection, you need to install <code>aircrack-ng</code>:</p><pre><code class="language-shell">sudo apt install aircrack-ng</code></pre><p>To check whether packet injection is working, make sure a Wi-Fi network is in range, put the adapter in monitor mode, and then run the following command:</p><pre><code class="language-shell"># sudo aireplay-ng --test &lt;interface&gt;
sudo aireplay-ng --test wlx00c0caadd40c</code></pre><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/08/Screenshot-from-2021-08-21-10-38-46.png" class="kg-image" alt="Install Driver for ALFA AWUS036ACS on Linux" loading="lazy" width="793" height="210" srcset="https://avabodha.in/content/images/size/w600/2021/08/Screenshot-from-2021-08-21-10-38-46.png 600w, https://avabodha.in/content/images/2021/08/Screenshot-from-2021-08-21-10-38-46.png 793w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[Deep Neural Network with MNIST Digit Recognition Example from Scratch]]></title><description><![CDATA[Multiclass classification using deep neural network in numpy only. Also MNIST handwritten digit classification in numpy.]]></description><link>https://avabodha.in/deep-neural-network-with-mnist-digit-recognition-example-from-scratch/</link><guid isPermaLink="false">60fac7d5f45da2053a90d934</guid><category><![CDATA[neural-network]]></category><dc:creator><![CDATA[Parikshit Patil]]></dc:creator><pubDate>Fri, 23 Jul 2021 14:08:09 GMT</pubDate><media:content url="https://avabodha.in/content/images/2021/07/net2.png" medium="image"/><content:encoded><![CDATA[<img src="https://avabodha.in/content/images/2021/07/net2.png" alt="Deep Neural Network with MNIST Digit Recognition Example from Scratch"><p>This will be the \(3^{rd}\) and last article in the <code>neural network and deep learning</code> series. We will see multiclass classification in this article. Before that, check the previous articles first and then return here. 
Below are the links to these previous articles.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://avabodha.in/deep-l-layer-neural-network-from-scratch-in-python/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Deep L-layer Neural Network from Scratch in Python</div><div class="kg-bookmark-description">Basics of feed-forward neural networks. Generic implementation for L layer deep neural network in NumPy.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://avabodha.in/favicon.ico" alt="Deep Neural Network with MNIST Digit Recognition Example from Scratch"><span class="kg-bookmark-author">Avabodha</span><span class="kg-bookmark-publisher">Parikshit Patil</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://avabodha.in/content/images/2021/07/net.png" alt="Deep Neural Network with MNIST Digit Recognition Example from Scratch"></div></a></figure><p>So far we have only seen binary classification problems (either 1 or 0). In this article, we will look at a multiclass (meaning one of many classes) problem. Let&apos;s look directly at what the MNIST handwritten digit dataset is. It contains 60000 images of handwritten digits (0-9). That makes 10 classes in total &#x1F644;, i.e. multiclass classification. <strong>All images are grayscale and 28x28 pixels in size.</strong> You can check some samples in the image below.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://avabodha.in/content/images/2021/07/image-33.png" class="kg-image" alt="Deep Neural Network with MNIST Digit Recognition Example from Scratch" loading="lazy" width="638" height="273" srcset="https://avabodha.in/content/images/size/w600/2021/07/image-33.png 600w, https://avabodha.in/content/images/2021/07/image-33.png 638w"><figcaption><em>Samples from MNIST Dataset</em></figcaption></figure><p>Before tackling MNIST, let&apos;s take a simple network with multiclass output. A 4-class neural network is shown in the image below. 
Here, the shape of \(\hat{y}\) will be (4, m), where m is the number of examples. In general, the shape of \(\hat{y}\) is (K, m), where K is the number of classes.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://avabodha.in/content/images/2021/07/image-34.png" class="kg-image" alt="Deep Neural Network with MNIST Digit Recognition Example from Scratch" loading="lazy" width="792" height="399" srcset="https://avabodha.in/content/images/size/w600/2021/07/image-34.png 600w, https://avabodha.in/content/images/2021/07/image-34.png 792w" sizes="(min-width: 720px) 720px"><figcaption><em>Multiclass Network Sample Example</em></figcaption></figure><p>Let&apos;s revise some terminology. \(m\) is the number of samples in the dataset and \(n_x\) is the input size, so \((n_x, m)\) is the input matrix size. Likewise, \(n_y = K\) is the output size, so \((n_y, m)\) is the output matrix size. As per our neural network above, a label will be one of the following column vectors, because exactly 1 of the 4 classes is present for each sample. For convenience, suppose the classes are cat, dog, frog and rabbit. Then \(\begin{bmatrix} 1\ 0\ 0\ 0\ \end{bmatrix}\) is the cat class. This kind of encoding is called <code>One Hot Encoding</code> because, for a particular sample, only one value is hot (1) and the others are cold (0).</p><!--kg-card-begin: html--><div style="width:100%;overflow-x: auto">

$$
  Y \in
\begin{bmatrix}
1 \\
0 \\
0 \\
0
\end{bmatrix},
\begin{bmatrix}
0 \\
1 \\
0 \\
0
\end{bmatrix},
\begin{bmatrix}
0 \\
0 \\
1 \\
0
\end{bmatrix},
\begin{bmatrix}
0 \\
0 \\
0 \\
1
\end{bmatrix}
$$

</div><!--kg-card-end: html--><h2 id="initialize-variables">Initialize Variables</h2><p>In general, our data will look like the following matrices, where \(X\) is the input and \(Y\) is the output. In the equations below, \(K\) is the number of classes, and we use \(K\) and \(n_y\) interchangeably.</p><!--kg-card-begin: html--><div style="width:100%;overflow-x: auto">

$$
  X =
\begin{bmatrix}
x_{11} &amp; x_{21} &amp; \dots &amp; x_{m1}\\
\vdots &amp; \vdots &amp; \dots &amp; \vdots\\
x_{1n_x} &amp; x_{2n_x} &amp; \dots &amp; x_{mn_x}
\end{bmatrix}
; Y =
\begin{bmatrix}
y_{11} &amp; y_{21} &amp; \dots &amp; y_{m1}\\
\vdots &amp; \vdots &amp; \dots &amp; \vdots\\
y_{1K} &amp; y_{2K} &amp; \dots &amp; y_{mK}
\end{bmatrix}
$$

</div><!--kg-card-end: html--><p>Before we jump into the forward pass, let&apos;s create a dataset. We will create random data and, at the end, check whether the cost is decreasing. <strong>If the cost is decreasing after each epoch, we can say that our model is working fine</strong>.</p><pre><code class="language-python">import numpy as np

# set seed
np.random.seed(95)

# number of examples in dataset
m = 6
# number of classes
K = 4
# input shape
n_x = 3

# input (n_x, m)
X = np.random.rand(n_x, m)

# hypothetical random labels
labels = np.random.randint(K, size=m)

# convert to one hot encoded
Y = np.zeros((K, m))
for i in range(m):
  Y[labels[i]][i] = 1

print(&quot;X:\n&quot;, X)
print(&quot;Labels: &quot;, labels)
print(&quot;Y:\n&quot;, Y)
</code></pre><p><strong>Output</strong></p><!--kg-card-begin: html--><div>
<pre class="output">
X:
 [[0.22880349 0.19068802 0.88635967 0.7189259  0.53298338 0.8694621 ]
 [0.72423768 0.48208699 0.7560772  0.97473999 0.5083671  0.95849135]
 [0.49426336 0.51716733 0.34406231 0.96975023 0.25608847 0.40327522]]
Labels:  [2 1 3 0 0 3]
Y:
 [[0. 0. 0. 1. 1. 0.]
 [0. 1. 0. 0. 0. 0.]
 [1. 0. 0. 0. 0. 0.]
 [0. 0. 1. 0. 0. 1.]]
 </pre>
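The one-hot loop above can also be written as a single NumPy indexing step. A small self-contained sketch using the same labels as in the output above (the identity-matrix trick is a common idiom, not from the original notebook):

```python
import numpy as np

# class indices for the 6 examples, as printed above
labels = np.array([2, 1, 3, 0, 0, 3])
K = 4  # number of classes

# column i of the identity matrix is the one-hot vector for class i,
# so selecting columns by label builds the whole (K, m) matrix at once
Y = np.eye(K)[:, labels]
print(Y)
```

Taking `argmax` over axis 0 recovers the original labels, which is a handy sanity check.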
</div><!--kg-card-end: html--><p>Now it&apos;s time to randomly initialize the weights and set the biases to 0. We will create a <code>units</code> list which tells us <strong>how many neurons are in each layer</strong>.</p><pre><code class="language-python"># units in each layer
# first number (3) is input dimension
units = [3, 5, 5, 4]

# Total layers except input
L = len(units) - 1

# parameter dictionary
parameters = dict()

for layer in range(1, L+1):
    parameters[&apos;W&apos; + str(layer)] =   np.random.rand(units[layer],units[layer-1])
    parameters[&apos;b&apos; + str(layer)] = np.zeros((units[layer],1))
</code></pre><h2 id="forward-pass">Forward Pass</h2><p>We saw the forward pass in the previous article, and no change is required for feed forward. So let&apos;s copy the equations from the previous article. Normally people use a <strong>Softmax</strong> activation function at the last layer, but for simplicity we will continue using <strong>Sigmoid</strong>.</p><p>\[<br>\begin{align*}<br>Z^{[l]} &amp;= W^{[l]}.a^{[l-1]} + b^{[l]} \\<br>a^{[l]} &amp;= g^{[l]}{(Z^{[l]})}<br>\end{align*}<br>\]</p><p>Let&apos;s write the code for the forward pass in Python, reusing our generic code from the previous article.</p><pre><code class="language-python">cache = dict()
cache[&apos;a0&apos;] = X

for layer in range(1, L+1):
    cache[&apos;Z&apos; + str(layer)] = np.dot(parameters[&apos;W&apos; + str(layer)], cache[&apos;a&apos; + str(layer-1)]) + parameters[&apos;b&apos; + str(layer)]
    cache[&apos;a&apos; + str(layer)] = sigmoid(cache[&apos;Z&apos; + str(layer)])
</code></pre><h2 id="backward-pass">Backward Pass</h2><p>Everything we have seen so far is from the previous article. Now comes the most important part, i.e. the <strong>Loss Function &amp; Cost Function</strong>. We already saw the difference between a loss function and a cost function in article 1 of this series. We will use Binary Cross Entropy, generalized to \(K\) classes, as our loss function. The formula is the same as <strong>Binary Cross Entropy</strong> extended to multiple classes. Below, the second equation is the vectorized version of the first.</p><!--kg-card-begin: html--><div style="width:100%;overflow-x: auto">

$$
\begin{align*}
L(\hat{y},y) &amp;= - \sum \limits_{i=1}^K [y_i\log{\hat{y}_i} + (1-y_i)\log{(1-\hat{y}_i)}] \\
&amp;= - [y\log{\hat{y}} + (1-y)\log{(1-\hat{y})}]
\end{align*}
$$

</div><!--kg-card-end: html--><p>The cost function is the average of the losses over all examples, so it will be</p><!--kg-card-begin: html--><div style="width:100%;overflow-x: auto">

$$
\begin{align*}
J(W,b) &amp;=  \frac{1}{m} \sum \limits_{j=1}^m L(\hat{y}^{j},y^{j}) \\
 &amp;= - \frac{1}{m} \sum \limits_{j=1}^m \sum \limits_{i=1}^K [y_i^j\log{\hat{y}_i^j} + (1-y_i^j)\log{(1-\hat{y}_i^j)}] \\
 &amp;= -  \frac{1}{m} \sum \limits_{j=1}^m  [y^j\log{\hat{y}^j} + (1-y^j)\log{(1-\hat{y}^j)}]
\end{align*}
$$

</div><!--kg-card-end: html--><p>Now it&apos;s time to find the derivative of the loss function. Before that, check the image below so that you can compare the binary classification problem with multiclass classification. Once we find \(dZ^L\), we are done. As the image shows, finding \(dZ^L\) is the tricky part; all gradients in the layers before it can be found using our regular (generic) code. Check the previous article.</p><figure class="kg-card kg-image-card"><img src="https://avabodha.in/content/images/2021/07/image-35.png" class="kg-image" alt="Deep Neural Network with MNIST Digit Recognition Example from Scratch" loading="lazy" width="1500" height="644" srcset="https://avabodha.in/content/images/size/w600/2021/07/image-35.png 600w, https://avabodha.in/content/images/size/w1000/2021/07/image-35.png 1000w, https://avabodha.in/content/images/2021/07/image-35.png 1500w" sizes="(min-width: 720px) 720px"></figure><p>In the equations below, \(W\) &amp; \(b\) represent all the weights and biases of the network.</p><!--kg-card-begin: html--><div style="width:100%;overflow-x: auto">

$$
\begin{align*}
dZ^L &amp;= \frac{\partial{J(W,b)}}{\partial{Z^L}} \\
 &amp;= \frac{\partial}{\partial{Z^L}}[-  \frac{1}{m} \sum \limits_{j=1}^m  [y^j\log{\hat{y}^j} + (1-y^j)\log{(1-\hat{y}^j)}]] \\
&amp;= -  \frac{1}{m} \sum \limits_{j=1}^m [\frac{\partial}{\partial{Z^L}} [y^j\log{\hat{y}^j} + (1-y^j)\log{(1-\hat{y}^j)}]] \\
&amp;=-  \frac{1}{m} \sum \limits_{j=1}^m (y^j - \hat{y}^j) \\
&amp;= \frac{1}{m} \sum \limits_{j=1}^m ( \hat{y}^j - y^j)
\end{align*}
$$

</div><!--kg-card-end: html--><p>The above equation can be further vectorized for speedup. I will start another series on <strong>High Performance Computing</strong> as early as possible; in it, we will see the advantages of vectorization.</p><p>\[<br>\begin{align*}<br>dZ^L &amp;= \frac{1}{m} (\hat{y}-y)<br>\end{align*}<br>\]</p><p>It&apos;s code time &#x1F607;. Let&apos;s implement backpropagation. You may notice that the code from the previous article and this article is the same; that&apos;s because the vectorized equation for \(dZ^L\) is the same for both binary and multiclass classification. Remember that our example with random data doesn&apos;t make any real-world sense, because the data has no pattern. We only care about whether <code>the cost is decreasing or not</code>. You can find the colab notebook in the <a>references and code</a> section.</p><pre><code class="language-python">def cost(y,y_hat):
  return -np.sum(y*np.log(y_hat) + (1-y)*(np.log(1-y_hat)))

y_hat = cache[&apos;a&apos; + str(L)]
cost_ = cost(Y, y_hat)
cache[&apos;dZ&apos; + str(L)] = (1/m) * (y_hat - Y)
cache[&apos;dW&apos; + str(L)] = np.dot(cache[&apos;dZ&apos; + str(L)], cache[&apos;a&apos; + str(L-1)].T)
cache[&apos;db&apos; + str(L)] = np.sum(cache[&apos;dZ&apos; + str(L)], axis=1, keepdims=True)

for layer in range(L-1,0,-1):
  cache[&apos;dZ&apos; + str(layer)] = np.dot(parameters[&apos;W&apos; + str(layer+1)].T, cache[&apos;dZ&apos; + str(layer+1)]) * inv_sigmoid(cache[&apos;Z&apos; + str(layer)])
  cache[&apos;dW&apos; + str(layer)] = np.dot(cache[&apos;dZ&apos; + str(layer)], cache[&apos;a&apos; + str(layer-1)].T)
  cache[&apos;db&apos; + str(layer)] = np.sum(cache[&apos;dZ&apos; + str(layer)], axis=1, keepdims=True)
</code></pre><h2 id="mnist-handwritten-digit-recognition">MNIST Handwritten Digit Recognition</h2><p>In the random data example above, the data and output don&apos;t mean anything, so now we will look at the real-life example of handwritten digit classification. You already saw the dataset at the start of this article.</p><h3 id="load-dataset">Load Dataset</h3><p>Now we will load the data. A link to the dataset is given in the <a>references and code</a> section.</p><pre><code class="language-python">import numpy as np
np.random.seed(95)

# Load CSV File
data = np.genfromtxt(&quot;mnist_train.csv&quot;,delimiter = &apos;,&apos;)

# first column is labels
y = data[:,0]

# rest 784 columns are features / pixel values.
X = data[:,1:785].T

# Some constants
K = 10  # No of classes
alpha = 0.1  # Learning rate
m = 60000  # No of examples

# convert to one hot encoded
Y = np.zeros((K,m))
for i in range(m):
  Y[int(y[i]),i] = 1

# print shape of input and output/label
print(&apos;Shape of X:&apos;,X.shape)
print(&apos;Shape of y:&apos;,y.shape)
print(&apos;Shape of Y:&apos;,Y.shape)
</code></pre><p><strong>Output</strong></p><!--kg-card-begin: html--><div>
<pre class="output">
Shape of X: (784, 60000)
Shape of y: (60000,)
Shape of Y: (10, 60000)
</pre>
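One thing worth noting before training: the pixel values loaded above are in the range [0, 255], which can saturate sigmoid units. Scaling the inputs to [0, 1] is a common remedy (a suggestion, not part of the original notebook; the array below is a small random stand-in, since the CSV is not bundled with this post):

```python
import numpy as np

# stand-in for the loaded MNIST matrix of shape (784, m);
# real code would use the X loaded from mnist_train.csv
X = np.random.randint(0, 256, size=(784, 12)).astype(float)

# scale pixel intensities from [0, 255] down to [0, 1]
X = X / 255.0
print(X.min(), X.max())
```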
</div><!--kg-card-end: html--><h3 id="initialize-weights-and-biases">Initialize Weights and Biases</h3><pre><code class="language-python"># units in each layer
# first number (784) is input dimension
units = [784, 128, 64, 10]

# Total layers except input
L = len(units) - 1

# parameter dictionary
parameters = dict()

for layer in range(1, L+1):
    parameters[&apos;W&apos; + str(layer)] = np.random.normal(0,1,(units[layer],units[layer-1]))
    parameters[&apos;b&apos; + str(layer)] = np.zeros((units[layer],1))
</code></pre><h3 id="define-sigmoid-and-derivative-of-sigmoid-function">Define Sigmoid and Derivative of Sigmoid Function</h3><pre><code class="language-python">def sigmoid(X):
    return 1 / (1 + np.exp(- X))

def inv_sigmoid(X):
    # derivative of the sigmoid (not its inverse, despite the name)
    return sigmoid(X) * (1-sigmoid(X))
</code></pre><h3 id="forward-pass-1">Forward Pass</h3><pre><code class="language-python">cache = dict()
def forward_pass(X):
  cache[&apos;a0&apos;] = X

  for layer in range(1, L+1):
    cache[&apos;Z&apos; + str(layer)] = np.dot(parameters[&apos;W&apos; + str(layer)],cache[&apos;a&apos; + str(layer-1)]) + parameters[&apos;b&apos; + str(layer)]
    cache[&apos;a&apos; + str(layer)] = sigmoid(cache[&apos;Z&apos; + str(layer)])
</code></pre><h3 id="backward-pass-1">Backward Pass</h3><p>We will use a batch size of 10 samples; because of that, you will find \(\frac{1}{10}\) instead of \(\frac{1}{m}\) (all the data at a time).</p><pre><code class="language-python">def cost(y,y_hat):
  return -np.sum(y*np.log(y_hat) + (1-y)*(np.log(1-y_hat)))

def back_prop(Y):
  y_hat = cache[&apos;a&apos; + str(L)]
  cache[&apos;dZ&apos; + str(L)] = (1/10)*(y_hat - Y)
  cache[&apos;dW&apos; + str(L)] = np.dot(cache[&apos;dZ&apos; + str(L)], cache[&apos;a&apos; + str(L-1)].T)
  cache[&apos;db&apos; + str(L)] = np.sum(cache[&apos;dZ&apos; + str(L)], axis=1, keepdims=True)

  for layer in range(L-1,0,-1):
    cache[&apos;dZ&apos; + str(layer)] = np.dot(parameters[&apos;W&apos; + str(layer+1)].T, cache[&apos;dZ&apos; + str(layer+1)]) * inv_sigmoid(cache[&apos;Z&apos; + str(layer)])
    cache[&apos;dW&apos; + str(layer)] = np.dot(cache[&apos;dZ&apos; + str(layer)], cache[&apos;a&apos; + str(layer-1)].T)
    cache[&apos;db&apos; + str(layer)] = np.sum(cache[&apos;dZ&apos; + str(layer)], axis=1, keepdims=True)
</code></pre><h3 id="update-weights-and-biases">Update Weights and Biases</h3><pre><code class="language-python">def update_weights():
  for layer in range(1, L+1):
    parameters[&apos;W&apos; + str(layer)] = parameters[&apos;W&apos; + str(layer)] - alpha * cache[&apos;dW&apos; + str(layer)]
    parameters[&apos;b&apos; + str(layer)] = parameters[&apos;b&apos; + str(layer)] - alpha * cache[&apos;db&apos; + str(layer)]
</code></pre><h3 id="start-training">Start Training</h3><pre><code class="language-python">epoch = 51

for i in range(epoch):
  cost_tot = 0
  for j in range(6000):
    forward_pass(X[:,j*10:j*10+10])
    cost_tot += cost(Y[:,j*10:j*10+10],cache[&apos;a&apos; + str(L)])
    back_prop(Y[:,j*10:j*10+10])
    update_weights()
  if i%5 == 0:
    print(&apos;epoch &apos;,i,&apos; &apos;,cost_tot)
</code></pre><p><strong>Output</strong></p><!--kg-card-begin: html--><div>
<pre class="output">
epoch  0   119429.14804683565
epoch  5   102128.37509302271
epoch  10   90151.75500527128
epoch  15   86009.41961305328
epoch  20   88218.24177992699
epoch  25   90432.20939203199
epoch  30   92974.92502732007
epoch  35   93986.34837736617
epoch  40   92380.93127681084
epoch  45   90417.26686598927
epoch  50   101933.2601655828
</pre>
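As the next paragraph discusses, the rise in cost near the end is a learning-rate effect. One common remedy is to shrink the learning rate as training progresses; below is a minimal sketch of 1/t decay (hypothetical values chosen for illustration; the original notebook keeps alpha fixed at 0.1):

```python
alpha0 = 0.1   # initial learning rate, as in the notebook
decay = 0.05   # decay strength per epoch (hypothetical)

def decayed_alpha(epoch):
    # 1/t decay: the step size shrinks smoothly with the epoch number
    return alpha0 / (1 + decay * epoch)

# early epochs take large steps, late epochs take small ones
print(decayed_alpha(0), decayed_alpha(50))
```

Passing `decayed_alpha(i)` to the weight update instead of a fixed `alpha` would dampen the late-epoch oscillation, at the cost of one more hyperparameter to tune.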
</div><!--kg-card-end: html--><p>The output above shows the cost decreasing until about the \(15^{th}\) epoch; after that it fluctuates, and at the \(50^{th}\) epoch it jumps up sharply. The reason for this is the <strong>Learning Rate</strong>. If the learning rate is too small, training takes more time; if it is too high, then what happens is what you just experienced. Finding a <strong>not too high, not too low</strong> learning rate is a difficult problem for a particular application. We will see another article titled <strong>Hyperparameter tuning tricks</strong> in the near future.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://avabodha.in/content/images/2021/07/image-36.png" class="kg-image" alt="Deep Neural Network with MNIST Digit Recognition Example from Scratch" loading="lazy" width="1235" height="479" srcset="https://avabodha.in/content/images/size/w600/2021/07/image-36.png 600w, https://avabodha.in/content/images/size/w1000/2021/07/image-36.png 1000w, https://avabodha.in/content/images/2021/07/image-36.png 1235w" sizes="(min-width: 1200px) 1200px"></figure><h3 id="check-some-samples">Check Some Samples</h3><pre><code class="language-python">forward_pass(X[:,0:10])

# axis 0 means max columnwise
# axis 1 means max rowwise
predicted = cache[&apos;a3&apos;].argmax(axis=0)
actual = Y[:,:10].argmax(axis=0)

print(&apos;Actual Labels: &apos;, actual)
print(&apos;Predicted Labels: &apos;,predicted)
</code></pre><p><strong>Output</strong></p><!--kg-card-begin: html--><div>
<pre class="output">
Actual Labels:  [5 0 4 1 9 2 1 3 1 4]
Predicted Labels:  [3 0 4 1 4 6 1 3 1 4]
</pre>
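Eyeballing predicted versus actual labels works for ten samples, but it is handy to reduce the comparison to a single accuracy number. A self-contained sketch using the ten labels printed above:

```python
import numpy as np

# the ten labels from the sample check above
actual    = np.array([5, 0, 4, 1, 9, 2, 1, 3, 1, 4])
predicted = np.array([3, 0, 4, 1, 4, 6, 1, 3, 1, 4])

# fraction of positions where the prediction matches the label
accuracy = (predicted == actual).mean()
print(accuracy)  # 7 of the 10 samples match
```

The same one-liner applied to the full 60000-column matrices would give the overall training accuracy.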
</div><!--kg-card-end: html--><h2 id="references-and-code">References and Code</h2><p><a href="https://pjreddie.com/media/files/mnist_train.csv">[1] Dataset : handwritten digit recognition</a></p><!--kg-card-begin: html--><div>You can find code (MNIST)<a target="_blank" href="https://colab.research.google.com/github/PSKP-95/pskp95-blog-codes/blob/master/Neural Network and Deep Learning/multiclass_classification_mnist.ipynb"><img style="float:right" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Deep Neural Network with MNIST Digit Recognition Example from Scratch"></a></div><!--kg-card-end: html--><!--kg-card-begin: html--><div>You can find code (Random Data)<a target="_blank" href="https://colab.research.google.com/github/PSKP-95/pskp95-blog-codes/blob/master/Neural Network and Deep Learning/multiclass_classification.ipynb"><img style="float:right" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Deep Neural Network with MNIST Digit Recognition Example from Scratch"></a></div><!--kg-card-end: html-->]]></content:encoded></item></channel></rss>