Monday, December 4, 2023

How TOTP Works: Generating OTPs Without an Internet Connection

Introduction

Have you ever wondered how authentication apps like RSA Authenticator generate One-Time Passwords (OTPs) without requiring an internet connection? This fascinating capability is made possible by Time-Based One-Time Passwords (TOTP). In this article, we will explore the mechanics of TOTP, its security features, and why the client does not need an internet connection to generate OTPs.

Understanding TOTP

1. TOTP in a Nutshell

TOTP, or Time-Based One-Time Password, is a security feature designed to strengthen the authentication process. It generates OTPs that are only valid for a short period, typically 30 seconds. TOTP uses a secret key, shared between the server and the user's device, to generate these OTPs. The central idea is to provide a second factor of authentication, beyond a static password, to strengthen security.

2. The RSA Authenticator App

One popular example of a TOTP implementation is the RSA Authenticator app. This app is commonly used for two-factor authentication and generates OTPs even when the device is offline. So, how does it work?

The Inner Workings of TOTP

1. Secret Key and Initialization

When setting up TOTP, the user's device and the authentication server share a secret key. This key is securely stored on both sides and is crucial for generating the OTPs. Both sides also derive a counter from the current time, and this counter advances every 30 seconds.

2. Time-Step and Hashing

To generate an OTP, the device combines the secret key with a time-step value. The time-step value is derived from the current time, typically Unix time (the number of seconds since January 1, 1970) divided by the 30-second window. The combination is then hashed, most commonly with the HMAC-SHA1 algorithm.

3. Presentation of OTP

The resulting hash is a 160-bit value (for HMAC-SHA1), which is then truncated to obtain a 6-to-8-digit OTP. The truncation takes a subset of bits from the hash; the number of digits and the specific bits selected are implementation-dependent.
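
To make steps 2 and 3 concrete, here is a minimal Java sketch of the generation logic. It is illustrative only: it assumes a raw byte-array secret and a fixed 6-digit, HMAC-SHA1-based code, whereas real authenticator apps usually start from a Base32-encoded secret and may use other hash algorithms or digit counts.

import java.nio.ByteBuffer;
import java.time.Instant;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TotpSketch {

    // Generate a 6-digit TOTP from a shared secret and a Unix timestamp,
    // using a 30-second time-step and HMAC-SHA1, as described above.
    static String generateTotp(byte[] secret, long unixSeconds) throws Exception {
        long timeStep = unixSeconds / 30;
        byte[] counter = ByteBuffer.allocate(8).putLong(timeStep).array();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(counter);               // 160-bit HMAC-SHA1 output

        // Dynamic truncation: the low 4 bits of the last byte pick an offset,
        // and 31 bits starting at that offset become the numeric code.
        int offset = hash[hash.length - 1] & 0x0F;
        int binary = ((hash[offset] & 0x7F) << 24)
                   | ((hash[offset + 1] & 0xFF) << 16)
                   | ((hash[offset + 2] & 0xFF) << 8)
                   | (hash[offset + 3] & 0xFF);

        return String.format("%06d", binary % 1_000_000); // keep the last 6 digits
    }

    public static void main(String[] args) throws Exception {
        byte[] demoSecret = "12345678901234567890".getBytes(); // demo value only
        System.out.println(generateTotp(demoSecret, Instant.now().getEpochSecond()));
    }
}

Running it twice within the same 30-second window prints the same code; once the window rolls over, the code changes.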

4. Rolling Code

The server, which knows the current time and the shared secret key, performs the same computation to produce its own OTP. If that OTP matches the one presented by the user, access is granted. In practice, servers usually also accept codes from one or two adjacent time-steps to tolerate small clock drift between the device and the server.

Advantages of TOTP

1. Offline OTP Generation

One of the major advantages of TOTP is that it does not require an internet connection to generate OTPs. Because the algorithm is based on a predefined time-step, the device and the server can independently generate the same OTP at the same time. This is particularly useful when an internet connection is unavailable, ensuring users can still access their accounts securely.

2. Enhanced Security

TOTP significantly enhances security because the OTPs are time-bound and change frequently. Even if an attacker intercepts an OTP, it becomes invalid within seconds, reducing the risk of unauthorized access.

TOTP: A Special Case of One-Time Passwords

1. What is TOTP?

TOTP, or Time-Based One-Time Password, is a specific flavour of one-time password that uses the current time as the input for generating codes; it is standardized in RFC 6238 as a time-based extension of the counter-based HOTP algorithm. Because the device and the server derive codes from the same clock and the same shared secret, their OTPs stay synchronized, allowing secure authentication even when the device is offline.

2. TOTP in Action

In TOTP, the user and the server share a secret key. The device and the server independently generate OTPs based on the current time. The time-step is typically set to 30 seconds, ensuring that OTPs remain valid for a short period.

Conclusion

TOTP plays a crucial role in modern authentication systems. It provides an additional layer of security by generating time-bound OTPs without requiring an internet connection on the client side. This capability ensures that even when internet access is unavailable, users can still access their accounts securely. The use of shared secret keys and time-based calculations makes TOTP a robust and widely adopted security feature, strengthening the overall security of online services.

Demystifying Service Mesh: How it Works and Why You Need It

Introduction:

In the ever-evolving landscape of modern application development and deployment, the concept of a "Service Mesh" has gained significant traction. As a tech blogger with over 12 years of experience, I'm here to provide a comprehensive update on this crucial topic. In this article, we'll delve into what a Service Mesh is, how it works, and why it has become an indispensable tool for managing complex microservices architectures.

What is a Service Mesh?

A Service Mesh is a dedicated infrastructure layer designed to facilitate communication between the microservices that make up an application. It acts as a transparent, language-agnostic network of interconnected components, providing essential functionalities such as service discovery, load balancing, security, and observability. The primary goal of a Service Mesh is to enhance the reliability, security, and manageability of microservices-based applications.

How Does it Work?

Now, let's dive deeper into how a Service Mesh actually works:

  1. Sidecar Proxy: At the heart of a Service Mesh, you'll find a sidecar proxy. Every microservice in the application is paired with its own proxy, effectively forming a "sidecar." These sidecar proxies are responsible for intercepting all inbound and outbound network traffic to and from the microservice they are attached to.
  2. Service Discovery: When a microservice needs to communicate with another service, it queries the Service Mesh for the location of the target service. The Service Mesh provides dynamic service discovery, ensuring that services can locate each other regardless of their changing IP addresses or locations.
  3. Load Balancing: Service Meshes implement sophisticated load balancing algorithms, distributing incoming requests evenly across instances of a service. This helps in optimizing resource utilization and ensuring high availability.
  4. Security: Security is a top priority in microservices architectures. Service Meshes offer robust security features like mutual TLS (mTLS) encryption, authentication, and authorization. With mTLS, all communication between microservices is encrypted and authenticated, significantly enhancing the overall security posture.
  5. Traffic Management: Service Meshes allow for fine-grained traffic control and routing. This means you can implement A/B testing, canary releases, and gradual rollouts with ease, all while monitoring the impact on your application's performance and stability (a concrete routing sketch follows this list).

  6. Observability and Monitoring: Service Meshes provide rich observability features, including metrics, logging, and tracing. This enables DevOps teams to gain deep insights into the behavior of their microservices and diagnose issues quickly.
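
To make the traffic-management point concrete, here is a hypothetical routing rule in the style of Istio, one popular Service Mesh implementation. The service name, subsets, and weights are invented for this illustration; the idea is that the mesh, not the application code, decides how traffic is split between versions.

# Hypothetical Istio VirtualService: send 90% of traffic to the stable v1
# subset of the "reviews" service and 10% to a v2 canary.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10

(The v1 and v2 subsets would be declared in a companion DestinationRule.) Shifting the weights rolls the canary forward or back without redeploying either service.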

Why You Need a Service Mesh:

Now, you might wonder why Service Meshes have gained such popularity. Here are a few key reasons:

  1. Microservices Complexity: As applications become more microservices-oriented, managing the complexity of service-to-service communication becomes increasingly challenging. Service Meshes provide a centralized solution for handling this complexity.
  2. Resilience and Reliability: With features like load balancing, circuit breaking, and automatic retries, Service Meshes improve the overall resilience of your application. They can handle failures gracefully, reducing downtime and improving user experience.
  3. Security: Service Meshes enhance the security of your microservices by implementing encryption and authentication. This is crucial, especially in multi-cloud or hybrid cloud environments.
  4. Observability: The ability to monitor and troubleshoot your microservices is essential for maintaining high availability and performance. Service Meshes offer a wealth of observability tools that simplify this process.

Conclusion:

In the world of modern application development, a Service Mesh has become more of a necessity than a luxury. It offers a unified solution for managing the complexities of microservices architectures, ensuring reliability, security, and observability. 

Tuesday, May 5, 2020

Creating first Jenkins pipeline: tutorial


Jenkins offers a feature called Jenkins Pipeline: a collection of jobs that, using automation tools, brings software from version control into the hands of end users. A pipeline represents multiple Jenkins jobs as one coherent workflow.

In this blog, I am going to share my knowledge on how we can chain multiple Jenkins jobs into a pipeline. Jenkins supports two different syntaxes, Declarative and Scripted; in our examples, we're going to use the Scripted Pipeline, which follows a more imperative programming model built with Groovy.


Prerequisite:
  • Code on bitbucket/GitHub
  • Jenkins Installation
  • Download the plugins required to run pipelines, such as Pipeline, SonarQube Scanner, Checkstyle, JUnit, Git Integration, and Maven Integration.
  • Sonar up and running. 
Let's start creating a pipeline that will do the below tasks:
  • Clone the project from GitHub
  • Build and run JUnit test cases
  • Run Sonar
  • Run Checkstyle
  • Package it as a jar file


 
Configuration Steps: 
  • Let's create a new Jenkins job. Go to Jenkins -> New Item
  • Add a name under 'Enter an item name', select Pipeline as the type, and click the OK button.

  • I am skipping the description and other tabs here and jumping straight to the Pipeline tab, as I covered them in my previous blog and the pipeline will run without them.
  • Add the below script, check Use Groovy Sandbox, and save it.

node {
    // Clone the project from GitHub
    stage('Clone') {
        git 'https://github.com/abdulwaheed18/demo.git'
    }

    // Build the project and run the JUnit tests
    stage('Build') {
        sh "mvn clean install"
    }

    // Run Sonar for code coverage
    // (skip this stage if no Sonar instance is available)
    stage('Sonar') {
        sh "mvn sonar:sonar"
    }

    // Run the Checkstyle analysis and publish the report
    stage('Checkstyle') {
        sh "mvn checkstyle:checkstyle"

        step([$class: 'CheckStylePublisher',
              canRunOnFailed: true,
              defaultEncoding: '',
              healthy: '100',
              pattern: '**/target/checkstyle-result.xml',
              unHealthy: '90',
              useStableBuildAsReference: true
        ])
    }

    // Publish the test results and archive the application jar
    stage('Package') {
        junit '**/target/surefire-reports/TEST-*.xml'
        archiveArtifacts 'target/*.jar'
    }
}



  • To configure the SonarQube URL, go to Jenkins -> Manage Jenkins -> Configure System, set the Server URL, and save it.
  • You can see your newly created pipeline on the Jenkins dashboard

  • Click on Jenkins-pipeline-demo and then, on the right side, click Build Now to build the project and start the pipeline.

  • Once your job has completed, you will see the below screen.

  • As the final stage packages the application as a jar, you can see a blue downward-arrow button; clicking on it will download your application as a JAR file.
  • You can check the logs by clicking on the blue circle button on the left side, or you can hover over a stage cell and click the Logs button.

  • To check the Sonar report, go to the Sonar server URL that you configured. It will show you total code coverage, unused imports, and bad code.

  • We also added the Checkstyle stage to the pipeline; to check its report, click on the Checkstyle Warnings link below the Build Now link.

  • Here we see 12 high-priority warnings, browsable by clicking on them. The Details tab gives you more insight into the errors in each class.
Conclusion:
We were able to set up a simple Jenkins pipeline that pulls the code, builds it, runs Sonar and other code-analysis tools, and packages the result. As always, the source code used in this project can be found on GitHub.


Wednesday, April 22, 2020

Continuous Integration with Jenkins and Spring Boot App


Jenkins can be used for multiple purposes. For example, whenever a developer commits code changes to SCM, Jenkins can trigger a job that checks out the code, builds it, runs JUnit test cases, runs tools like Sonar or Checkmarx, and, if everything works properly, deploys it to some instance.

In this tutorial, I'll share my knowledge on how we can automate our test process by introducing a CI server like Jenkins. We will configure Jenkins so that whenever any developer commits code to SCM, it pulls the code from Git and runs Maven to build and test it.

Prerequisite:
  • Code uploaded on Git (or use this link)
  • Jenkins up and running. Refer here for installation.
  Once Jenkins is up and running, you will see the below screen


  Configuration Steps:
·       Install the required plugin.
o   Make sure all the required plugins, like Git and Maven, are already present.
o   Go to Manage Jenkins -> Manage Plugins -> Installed tab


o   If you don't find the required plugin, search for it under the Available tab, e.g. to install the Maven Integration plugin.
o   Select the plugin and click on 'Download now and install after restart'.

o   It will take some time to download and install the plugin, depending on your network bandwidth.



·       Set Java and Maven paths
o   By default, Jenkins picks up the Java and Maven installations on the Ubuntu instance, but if you have a specific Java or Maven directory then you can configure those as well.
o   Go to Manage Jenkins -> Global Tool Configuration

·       Generate an SSH key pair for the Jenkins user
o   If you want to access a private Git repo, for example at GitHub, you need to generate an SSH key pair. Create an SSH key with the following command.
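
The command itself appeared as a screenshot in the original post; a typical invocation, assuming the key is created for the jenkins system user so that it lands in that user's ~/.ssh directory, would be:

sudo -u jenkins -H ssh-keygen -t rsa -b 4096 -C "jenkins@your-host"

The comment (-C) is just a label and can be anything; accept the default file location and decide whether to protect the key with a passphrase.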

o   The public key must be uploaded to the service you are using, e.g., GitHub.


·       Setting up Jenkins job
o   The build of a project is handled via jobs in Jenkins. Click New Item. Afterward, enter a name for the job and select the Freestyle Project and press OK.

o   Add description of the job
o   Check Discard old builds. It helps clean up stale logs. You can configure log rotation based on the number of builds or days. Always enable this option to make sure the Jenkins logs don't exhaust your disk space.

o   Under Source Code Management, choose Git, as we are going to check out the code from Git. Set the Git URL (https://github.com/abdulwaheed18/demo.git); there is no need for credentials as my repo is public. Set the branch to master.

o   Under Build Triggers, choose Poll SCM so that the job runs whenever someone commits code to the master branch.
o   Set the Schedule value to * * * * * (Jenkins cron syntax), which defines when to poll (two example schedules are shown after these configuration steps).


o   This means Jenkins polls the SCM every minute of every hour, every day of every month.
o   Under Build, select 'Invoke top-level Maven targets' and set Goals to clean package.

o   Apply and save the configuration.
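
For reference, the Poll SCM schedule uses Jenkins cron syntax: five fields for minute, hour, day of month, month, and day of week. Two typical values (the second uses Jenkins' H token, which spreads the exact polling minute across jobs):

* * * * *        poll every minute (as configured above; fine for a demo, noisy for real projects)
H/15 * * * *     poll roughly every 15 minutes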

·       Run the build
o   Job is created as jenkins-demo.

o   Click on jenkins-demo and then Build now to run the newly created job.

·       Congratulations! Your first job was created successfully. The blue circle under Build History means there was no issue during job execution and it ran successfully.






Tuesday, April 21, 2020

Jenkins Installation on Ubuntu

Jenkins is an open-source Continuous Integration server capable of orchestrating a chain of actions that help achieve the Continuous Integration process (and more) in an automated fashion.

Jenkins is free and entirely written in Java. It is widely used around the world, with around 300k installations, and that number is growing day by day.

It is a server-based application that runs in a servlet container such as Apache Tomcat (the standard distribution also ships with its own embedded servlet container, so a separate web server is optional). The reason Jenkins became so popular is its monitoring of the repetitive tasks that arise during the development of a project. For example, if your team is developing a project, Jenkins will continuously test your project builds and show you the errors in the early stages of your development.

By using Jenkins, software companies can accelerate their software development process, as Jenkins can automate builds and tests at a rapid rate. Jenkins supports the complete software development lifecycle: building, testing, documenting, deploying, and other stages.

Prerequisite

  • Make sure you are logged in as a user with sudo privileges
  • Java 8 or later version up and running
    • sudo apt-get update
    • sudo apt-get install default-jdk
    • java -version
  • Maven is up & running
    • sudo apt-get install maven
    • mvn -version


Installation steps:
  • Add repository key to System
    •  wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
  • Append the Debian package repository address to the server’s sources.list:
    •  sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
  • sudo apt-get update
  • sudo apt-get install jenkins
  • Start the Jenkins
    • sudo service jenkins start


By default, Jenkins starts on port 8080. Find the IP address of your Ubuntu machine using the ifconfig command and hit http://ip:8080 from your browser to view the Jenkins dashboard.



The above screen asks for the initial Jenkins password, which can be retrieved from the initialAdminPassword file:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword



From the next screen, click on Install suggested plugins, which will immediately begin the installation process:




Once the installation is done, you will be asked to set up the first administrative user. You can skip this step and continue as admin using the initial password from above, or you can create a new Jenkins user.


Fill in all the fields and click the Save and Continue button. The next screen is the Instance Configuration page, which asks you to confirm the preferred URL for your Jenkins instance. Click Save and Finish.

Click Start using Jenkins to visit the main Jenkins dashboard:

Congratulations! Your Jenkins is up and running on your Ubuntu instance.

Wednesday, January 8, 2020

Setting up Lombok with SpringToolSuite and Intellij Idea

Lombok is a Java library that plugs into your editor and build tools and automatically generates boilerplate code in the compiled .class files instead of in your source files.
E.g.: getters, setters, toString, equals, hashCode, builders, loggers, and many others.
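
As a small, hypothetical example (the class and field names are made up): with Lombok on the classpath, the annotations below cause getters, setters, toString, equals/hashCode, a builder, and an SLF4J logger to be generated into the compiled .class file, while the source file stays this short.

import lombok.Builder;
import lombok.Data;
import lombok.extern.slf4j.Slf4j;

@Data     // generates getters, setters, toString, equals and hashCode
@Builder  // generates a fluent builder: Employee.builder()...build()
@Slf4j    // generates a static SLF4J logger named 'log'
public class Employee {
    private Long id;
    private String name;
    private String department;
}

Elsewhere in the code you can write Employee e = Employee.builder().id(1L).name("Alice").build(); and call e.getName(), even though none of those methods appear in the source file.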

In this tutorial, I’ll talk about configuring it in two of the most popular IDEs- IntelliJ IDEA and Spring Tool Suite.

Check my Github repo to learn project Lombok using java source code.

Note: The steps for installing the plugin in Eclipse and Spring Tool Suite (STS) are the same.

Steps to configure Lombok in STS


  • Download Lombok jar from the Lombok site.
  • Double-click on the Lombok jar, which will open the below installer wizard. Choose the IDEs in which you want to install it. If your IDE is not listed, you can browse to it using the Specify location button.

  • Once selected, click on the Install / Update button and you are done.



Steps to configure Lombok in IntelliJ IDEA

  • Open IntelliJ Idea and click on File-> Settings…
  • Click on Plugin option and then search for Lombok
  • Click on the Install button on the plugin page



  • Once done with the installation, click on the restart IDE button.

Friday, November 8, 2019

Streaming Spring boot logs to ELK stack

In my previous blog, we installed ELK on Windows 10 and even pushed messages from the input console to Elasticsearch, finally viewing them in Kibana.

I will write a separate blog on why we need ELK.

In this blog, I'll show you how we can push Spring Boot application logs directly to Elasticsearch using Logstash, so that we can analyze them in Kibana. If you don't know how to install ELK on Windows 10, refer to my previous blog and start the Elasticsearch and Kibana servers.

Prerequisite


  • Elastic Search and Kibana running on your machine
  • Basic knowledge of Spring boot application


If you don’t want to start your application from scratch then you can download one spring boot application from my GitHub repository as well.

I am assuming that the Elasticsearch and Kibana servers are running on your machine and that you have a fair idea of how to start the Logstash server and what a Logstash conf file is.

So, to push Spring Boot logs continuously to Elasticsearch, we have to open a TCP port in the Logstash server. For that, we create a Logstash config file (say elklogstash.conf) under the ${LOGSTASH_HOME}/config directory, specifying under the input section which TCP port to listen on and under the output section where to push the data once it is received.

For simplicity, I am skipping the filter section, as it is optional.

elklogstash.conf
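
The contents of the conf file appeared as an image in the original post; a minimal sketch of what such a file might contain, assuming the application sends JSON log lines to TCP port 4560 and Elasticsearch runs locally with an index named elkbootlogs, is:

input {
  tcp {
    port  => 4560
    codec => json_lines
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "elkbootlogs"
  }
}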




Now start the Logstash server by passing the newly created conf file:
   bin\logstash -f .\config\elklogstash.conf



Cool! Now the Logstash server is also up and running, and if you observe the log, you will see that it is listening on port 4560 as mentioned in the conf file. Configure the newly created index (elkbootlogs) in Kibana, as we did during the ELK setup.

Now let's make some changes to the Spring Boot application so that it pushes all the logs to TCP port 4560.

For this tutorial, I am using spring-logger project from my Github repository.

Add the below dependency to the pom.xml file. We need the Logstash encoder to encode messages.

<!-- Added for the Logstash encoder -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.2</version>
</dependency>

Open the logback-spring.xml file, which is under the resources folder, and create a new appender (say elk). The task of this appender is to push logs to the destination TCP socket; within this appender, the LogstashEncoder is mandatory.

<appender name="elk" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>localhost:4560</destination>
    <!-- encoder is required -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder" />

</appender>

Add the new appender at the root level:

<!-- LOGGING everything at INFO level -->
<root level="info">
    <appender-ref ref="RollingFile" />
    <appender-ref ref="Console" />
    <appender-ref ref="elk" />
</root>

Save all the files and start your application. We are now done with all the setup; it's time to check whether all the changes work properly.

Open Kibana in your browser (http://localhost:5601) and select your index under the Discover tab. You will see all the logs populating in Kibana as well.



Congratulations! Our configuration is working absolutely fine and it is pushing logs to Elastic Search. 

You can download the source code from here; the ELK code changes are under the elkstack branch.





