
Sunday, 7 September 2025

Agentic AI 2025: Smarter Assistants with LAMs + RAG 2.0

September 07, 2025


Agentic AI in 2025: Build a “Downloadable Employee” with Large Action Models + RAG 2.0

Date: September 7, 2025
Author: LK-TECH Academy

The biggest shift in AI today isn’t just bigger models — it’s Agentic AI. These are systems that can plan, retrieve, and act using a toolset, delivering outcomes rather than just text. In this post, you’ll learn how Large Action Models (LAMs), RAG 2.0, and modern speed techniques like speculative decoding combine to build a practical, production-ready assistant.

1. Why this matters in 2025

  • Outcome-driven: Agents plan, call tools, verify, and deliver results.
  • Grounded: Retrieval adds private knowledge and live data.
  • Efficient: Speculative decoding + optimized attention reduce latency.

2. Reference Architecture

{
  "agent": {
    "plan": ["decompose_goal", "choose_tools", "route_steps"],
    "tools": ["search", "retrieve", "db.query", "email.send", "code.run"],
    "verify": ["fact_check", "schema_validate", "policy_scan"]
  },
  "rag2": {
    "retrievers": ["semantic", "sparse", "structured_sql"],
    "policy": "agent_decides_when_what_how_much",
    "fusion": "re_rank + deduplicate + cite"
  },
  "speed": ["speculative_decoding", "flashattention_class_kernels"]
}

3. Quick Setup (Code)

# Install dependencies (shell)
pip install langchain langgraph fastapi uvicorn faiss-cpu tiktoken httpx pydantic

# Python imports and an example tool (a stub that a real search API call would replace)
from typing import Any, Dict, List
import httpx

async def web_search(q: str, top_k: int = 5) -> List[Dict[str, Any]]:
    return [{"title": "Result A", "url": "https://...", "snippet": "..."}]

4. Agent Loop with Tool Use

SYSTEM_PROMPT = """
You are an outcome-driven agent.
Use tools only when they reduce time-to-result.
Always provide citations and a summary.
"""

5. Smarter Retrieval (RAG 2.0)

# `retriever` is assumed to be provided by your RAG stack (e.g. a hybrid
# semantic + sparse retriever) and must expose an async `retrieve(q)` method.
async def agent_rag_answer(q: str) -> Dict[str, Any]:
    docs = await retriever.retrieve(q)
    answer = " • ".join(d.get("snippet", "") for d in docs[:3]) or "No data"
    citations = [d.get("url", "#") for d in docs[:3]]
    return {"answer": answer, "citations": citations}

6. Make it Fast

Speculative decoding uses a smaller draft model to propose tokens and a larger target model to verify them; reported speedups are typically in the 2–4× range. FlashAttention-class kernels (such as FlashAttention-3) further improve GPU efficiency.
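To illustrate the acceptance logic (not the probabilistic version used in practice), here is a toy greedy sketch in which the "models" are simple stand-in functions rather than real LLMs:

```python
from typing import Callable, List

def speculative_decode(draft: Callable[[List[int]], int],
                       target: Callable[[List[int]], int],
                       prompt: List[int], max_new: int, k: int = 4) -> List[int]:
    """Greedy speculative decoding: the draft proposes k tokens per round;
    the target accepts the longest agreeing prefix, then emits one correction."""
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        # Draft model proposes k tokens cheaply.
        proposed, ctx = [], list(tokens)
        for _ in range(k):
            ctx.append(draft(ctx))
            proposed.append(ctx[-1])
        # Target model verifies the proposals in order.
        for t in proposed:
            expected = target(tokens)
            if t == expected:
                tokens.append(t)         # accepted: draft token confirmed
            else:
                tokens.append(expected)  # rejected: take the target's token instead
                break
    return tokens[:len(prompt) + max_new]

# Toy models: the target counts up by 1; the draft is wrong on multiples of 3.
target = lambda ctx: ctx[-1] + 1
draft = lambda ctx: ctx[-1] + (2 if ctx[-1] % 3 == 0 else 1)

print(speculative_decode(draft, target, [0], max_new=6))  # [0, 1, 2, 3, 4, 5, 6]
```

When the draft agrees with the target most of the time, several tokens are accepted per round, which is where the wall-clock savings come from.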

7. Safety & Evaluation

  • Allow-listed domains and APIs
  • Redact PII before tool use
  • Human-in-the-loop for sensitive actions

8. FAQ

Q: What’s the difference between LLMs and LAMs?
A: LLMs generate text, while LAMs take actions via tools under agent policies.

9. References

  • FlashAttention-3 benchmarks
  • Surveys on speculative decoding
  • Articles on Large Action Models and Agentic AI
  • Research on Retrieval-Augmented Generation (RAG 2.0)

Sunday, 12 March 2023

The Power of ChatGPT and Whisper Models

March 12, 2023

ChatGPT vs Whisper: A Deep Dive into AI Text Generation (With Code)

Natural Language Processing (NLP) is rapidly evolving, and two models at the forefront of this transformation are ChatGPT and Whisper, both developed by OpenAI. (Strictly speaking, Whisper is a speech-recognition model; as noted below, this post uses the name loosely for a seq2seq paraphrasing setup.) In this post, we’ll compare their architecture, training, and applications, and show you how to use both for automated text generation with Python code examples.


🤖 What is ChatGPT?

ChatGPT is a transformer-based generative language model developed by OpenAI. It's trained on massive datasets including books, articles, and websites, enabling it to generate human-like text based on a given context. ChatGPT can be fine-tuned for specific tasks such as:

  • Chatbots and virtual assistants
  • Text summarization
  • Language translation
  • Creative content writing

🔁 What is Whisper?

Whisper (hypothetically, as a paraphrasing model; note that OpenAI's Whisper is actually a speech recognition model) is described here as a sequence-to-sequence model built on encoder-decoder architecture. It's designed to generate paraphrases — alternative versions of the same text with similar meaning. Whisper is trained using supervised learning on large sentence-pair datasets.

🧠 Architecture Comparison

Feature       | ChatGPT                       | Whisper
Model Type    | Transformer (decoder-only)    | Encoder-decoder
Training Type | Unsupervised learning         | Supervised learning
Input         | Prompt text                   | Sentence or paragraph
Output        | Generated continuation        | Paraphrased version
Best for      | Text generation, chatbots, QA | Paraphrasing, rewriting, summarizing

🚀 Applications in the Real World

Both models are used widely in:

  • Customer support: Automated chatbot replies
  • Healthcare: Medical documentation and triage
  • Education: Language tutoring and feedback
  • Marketing: Email content, social captions, A/B testing

💻 Python Code: Using ChatGPT and Whisper

Here's how you can generate text using Hugging Face Transformers with ChatGPT-like and Whisper-like models in Python:


# Import required libraries
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoModelForSeq2SeqLM

# Load ChatGPT-like model (DialoGPT)
chatgpt_tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
chatgpt_model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")

# Load Whisper-like model (T5)
whisper_tokenizer = AutoTokenizer.from_pretrained("t5-small")
whisper_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Function to generate text using the ChatGPT-like model
def generate_text_with_chatgpt(prompt, length=60):
    input_ids = chatgpt_tokenizer.encode(prompt + chatgpt_tokenizer.eos_token, return_tensors='pt')
    output = chatgpt_model.generate(
        input_ids, max_length=length,
        do_sample=True, top_p=0.92, top_k=50,  # sampling must be enabled for top_p/top_k to apply
        pad_token_id=chatgpt_tokenizer.eos_token_id)
    return chatgpt_tokenizer.decode(output[0], skip_special_tokens=True)

# Function to generate paraphrases using the Whisper-like (seq2seq) model
def generate_text_with_whisper(prompt, num_paraphrases=3):
    # Note: vanilla t5-small is not paraphrase-tuned; a fine-tuned checkpoint works better.
    input_ids = whisper_tokenizer.encode(f"paraphrase: {prompt}", return_tensors='pt')
    outputs = whisper_model.generate(input_ids, num_beams=5,
                                     num_return_sequences=num_paraphrases,
                                     no_repeat_ngram_size=2)
    return [whisper_tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# Combine both models
def generate_with_both(prompt):
    base = generate_text_with_chatgpt(prompt)
    variants = generate_text_with_whisper(base, 3)
    return base, variants

# Example usage
chat_output = generate_text_with_chatgpt("Tell me a fun fact about space.")
paraphrased_output = generate_text_with_whisper(chat_output)

print("ChatGPT says:", chat_output)
print("Whisper paraphrases:", paraphrased_output)

📈 Opportunities and Challenges

Opportunities

  • Automate customer support with human-like interactions
  • Create multilingual content through translation and paraphrasing
  • Enhance personalization in marketing and sales

Challenges

  • Bias: AI can reflect training data biases
  • Reliability: Hallucinated or inaccurate outputs
  • Ethics: Misuse in misinformation or fake content

🔮 Future of NLP with ChatGPT and Whisper

With continuous model improvements and integration of multimodal inputs (text, image, audio), we can expect NLP to expand into even more advanced domains such as:

  • AI tutors and coaches
  • Legal and medical document drafting
  • Cross-modal understanding (video + text analysis)

📌 Final Thoughts

ChatGPT and Whisper demonstrate the power of modern NLP and generative AI. By using them individually or in combination, developers and content creators can automate, scale, and personalize text generation at an unprecedented level.

Have you tried building something with these models? Share your experience in the comments!



Sunday, 5 March 2023

DevOps automation using Python - Part 2

March 05, 2023

DevOps automation using Python

Please read the DevOps automation using Python - Part 1 article before this one, since this article is a continuation of the same.

Introduction to network automation with Python and Netmiko

Network automation involves automating the tasks of network devices such as switches, routers, and firewalls to improve efficiency and reduce errors. Python is a popular programming language used for network automation due to its simplicity and ease of use. Netmiko is a Python library used to automate network devices that support SSH connections.

In this article, we will provide an introduction to network automation with Python and Netmiko.

Setting up Python and Netmiko

To get started, you will need to install Python on your machine. You can download the latest version of Python from the official website (https://www.python.org/downloads/) and install it according to the installation instructions for your operating system.

Once you have installed Python, you can install Netmiko using pip, a Python package manager, by running the following command in your terminal:

pip install netmiko

Connecting to a Network Device with Netmiko

Netmiko supports various network devices such as Cisco, Juniper, and Arista. To connect to a network device using Netmiko, you will need to provide the IP address, username, and password of the device. For example, the following Python code connects to a Cisco switch using SSH and retrieves the device prompt:

from netmiko import ConnectHandler

device = {
    'device_type': 'cisco_ios',
    'ip': '192.168.0.1',
    'username': 'admin',
    'password': 'password',
}

connection = ConnectHandler(**device)

output = connection.find_prompt()

print(output)

Executing Commands on a Network Device

Once you have established a connection to a network device, you can execute commands on it using Netmiko. For example, the following Python code executes the show interfaces command on a Cisco switch and retrieves the output:

output = connection.send_command('show interfaces')

print(output)

You can also execute multiple commands on a network device using the send_config_set method. For example, the following Python code configures the interface speed and duplex of a Cisco switch:

config_commands = [
    'interface GigabitEthernet0/1',
    'speed 100',
    'duplex full',
]

output = connection.send_config_set(config_commands)

print(output)

Automating Network Tasks with Netmiko and Python

Netmiko and Python can be used to automate various network tasks such as device configuration, backup, and monitoring. For example, the following Python code configures the VLANs on a Cisco switch based on a YAML configuration file:

import yaml

with open('vlans.yml', 'r') as f:
    vlans = yaml.safe_load(f)

config_commands = []
for vlan_id, vlan_name in vlans.items():
    config_commands.append(f'vlan {vlan_id}')
    config_commands.append(f'name {vlan_name}')

output = connection.send_config_set(config_commands)

print(output)

The vlans.yml configuration file contains the VLAN IDs and names:

1: default
10: servers
20: users
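You can check the YAML-to-commands mapping offline, without connecting to a switch, by running the command-building loop on a plain dict (the dict below simulates what yaml.safe_load would return for a file with numeric VLAN IDs):

```python
# Simulate yaml.safe_load's result for a vlans.yml with numeric VLAN IDs.
vlans = {1: "default", 10: "servers", 20: "users"}

# Same command-building loop as in the script above.
config_commands = []
for vlan_id, vlan_name in vlans.items():
    config_commands.append(f"vlan {vlan_id}")
    config_commands.append(f"name {vlan_name}")

print(config_commands)
# ['vlan 1', 'name default', 'vlan 10', 'name servers', 'vlan 20', 'name users']
```

Verifying the generated commands like this before calling send_config_set is a cheap way to catch configuration-file mistakes early.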

Building a serverless CI/CD pipeline with Python and AWS Lambda

Building a serverless CI/CD pipeline with Python and AWS Lambda can improve the speed and efficiency of your software development process. In this article, we will discuss how to build a serverless CI/CD pipeline using Python and AWS Lambda.

The components required for building a serverless CI/CD pipeline with Python and AWS Lambda include:

  • AWS CodeCommit for source code management
  • AWS CodeBuild for building and testing code
  • AWS Lambda for automating the pipeline
  • AWS CodePipeline for continuous delivery
  • AWS CloudFormation for infrastructure deployment

Here is an example Python code to create a Lambda function that triggers the pipeline when changes are made in the CodeCommit repository:

import boto3
import json

def lambda_handler(event, context):
    codepipeline = boto3.client('codepipeline')
    try:
        response = codepipeline.start_pipeline_execution(name='my-pipeline')
        return {
            'statusCode': 200,
            'body': json.dumps('Pipeline execution started')
        }
    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps(str(e))
        }

This code uses the Boto3 library to start the CodePipeline execution when triggered by a change in the CodeCommit repository.

Best practices for writing clean and maintainable Python scripts for DevOps automation

Writing clean and maintainable Python scripts for DevOps automation is essential for ensuring that your scripts are easy to understand, modify, and troubleshoot. Here are some best practices to follow when writing clean and maintainable Python scripts for DevOps automation:
  1. Follow PEP 8 style guide: PEP 8 is the official Python style guide. Adhering to PEP 8 will make your code more readable and consistent.
  2. Use descriptive variable and function names: Use descriptive names that clearly convey the purpose of the variable or function. This makes the code more understandable.
  3. Use comments to explain the code: Use comments to explain what the code does, and any important details that are not immediately obvious.
  4. Break down large scripts into smaller functions: Breaking down large scripts into smaller functions can make the code easier to understand and maintain.
  5. Use exception handling: Use exception handling to catch and handle errors in your code. This helps make your code more robust and resilient.
  6. Write unit tests: Unit tests help ensure that your code is working as expected. They also make it easier to modify and maintain the code.
  7. Document your code: Document your code with clear and concise explanations of what the code does, how it works, and how to use it.
  8. Use version control: Use a version control system like Git to keep track of changes to your code. This makes it easier to collaborate with others and keep track of changes over time.
By following these best practices, you can write clean and maintainable Python scripts for DevOps automation that are easy to understand, modify, and troubleshoot. This will help you to be more productive and effective in your DevOps work.
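As a small illustration, the sketch below applies several of these practices at once: a descriptive name, a docstring, exception handling, and a pytest-style unit test (`parse_service_port` is a made-up example function):

```python
import logging

logger = logging.getLogger(__name__)

def parse_service_port(raw_value, default=8080):
    """Parse a port number from a config string, falling back to a default.

    Small, single-purpose functions like this are trivial to unit-test.
    """
    try:
        port = int(raw_value)
    except (TypeError, ValueError):
        logger.warning("Invalid port %r, using default %d", raw_value, default)
        return default
    if not 1 <= port <= 65535:
        logger.warning("Port %d out of range, using default %d", port, default)
        return default
    return port

# A matching pytest-style unit test covering all three branches:
def test_parse_service_port():
    assert parse_service_port("443") == 443
    assert parse_service_port("not-a-port") == 8080
    assert parse_service_port("70000") == 8080
```

Running the test with pytest exercises the happy path, the parse failure, and the range check in one go.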

Tips for troubleshooting and debugging Python scripts in DevOps

When working with Python scripts for DevOps automation, it is important to have effective troubleshooting and debugging skills to quickly identify and fix any issues. Here are some tips for troubleshooting and debugging Python scripts in DevOps:
  1. Use print statements: Inserting print statements in your code can help you identify the exact point where the code is failing.
  2. Use logging: Instead of using print statements, you can use Python's logging module to log messages at different severity levels. This can help you identify the exact point of failure in a more organized manner.
  3. Use debugging tools: Python has several built-in and third-party debugging tools such as pdb, PyCharm, and VS Code that can help you step through your code and identify any errors.
  4. Use exception handling: Use Python's exception handling mechanism to catch and handle errors in your code. This helps you write more robust and fault-tolerant code.
  5. Review error messages: When an error occurs, Python provides an error message that can help you identify the cause of the error. Review the error message carefully to identify the cause of the issue.
  6. Check your inputs and outputs: Ensure that your inputs and outputs are correct and as expected.
  7. Review your code: Go back to the code and review it carefully. Check if there are any logical errors, syntax errors, or other mistakes.
  8. Collaborate with others: If you are still unable to identify the issue, collaborate with your team members or experts who may have more experience or knowledge about the code.
By following these tips, you can quickly troubleshoot and debug Python scripts in DevOps and minimize downtime or disruption to your automation processes.
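Tips 2 and 4 work well together: logging an exception with `logger.exception` records the full traceback, which usually points straight at the failing line. A minimal sketch (the `restart_service` function is hypothetical):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("deploy")

def restart_service(name):
    try:
        if not name:
            raise ValueError("service name must not be empty")
        logger.info("Restarting %s", name)  # normal-path breadcrumb
        return True
    except ValueError:
        # logger.exception logs the message *and* the full traceback.
        logger.exception("Failed to restart service")
        return False

restart_service("nginx")  # logged at INFO
restart_service("")       # logged with a traceback, script keeps running
```

Unlike a bare print statement, this output carries severity levels and tracebacks, and it can later be routed to files or external log collectors without touching the call sites.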

Scaling DevOps automation with Python and Kubernetes

Python and Kubernetes are powerful tools for scaling DevOps automation. Here are some ways to use Python and Kubernetes together to scale your automation efforts:
  1. Use Kubernetes to manage containers: Kubernetes provides an efficient way to manage and orchestrate containers. Use Kubernetes to manage the deployment and scaling of containers that run your Python scripts.
  2. Use Kubernetes API in Python: Kubernetes has a powerful API that can be used to interact with the Kubernetes cluster. Use Python to interact with the Kubernetes API to manage your containers and deployments.
  3. Use Helm to manage Kubernetes resources: Helm is a package manager for Kubernetes that can be used to manage your Kubernetes resources. Use Helm to deploy and manage your Kubernetes resources, including your Python scripts.
  4. Use Kubernetes operators: Kubernetes operators are custom controllers that can be used to automate tasks in Kubernetes. Use Python to write Kubernetes operators that automate your DevOps tasks.
  5. Use Kubernetes monitoring and logging: Kubernetes provides built-in monitoring and logging capabilities. Use Python to write scripts that monitor and log your Kubernetes cluster and resources.
  6. Use Kubernetes scaling features: Kubernetes provides built-in scaling features that can be used to scale your deployments based on demand. Use Python to write scripts that automatically scale your deployments based on resource utilization or other metrics.
By leveraging the power of Python and Kubernetes, you can scale your DevOps automation efforts and improve the efficiency and reliability of your automation processes.
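As a sketch of point 6, the scaling decision itself can live in a small pure function that a script then feeds to the Kubernetes API. The API call (for example `patch_namespaced_deployment_scale` from the official kubernetes client) is only indicated in a comment so the logic stays testable offline:

```python
def desired_replicas(cpu_percent, current, minimum=2, maximum=10,
                     scale_up_at=80.0, scale_down_at=20.0):
    """Decide a new replica count from average CPU utilisation,
    clamped to a [minimum, maximum] range."""
    if cpu_percent > scale_up_at:
        target = current + 1
    elif cpu_percent < scale_down_at:
        target = current - 1
    else:
        target = current
    return max(minimum, min(maximum, target))

# In a real script you would then apply the decision, roughly:
#   apps_v1.patch_namespaced_deployment_scale(
#       name, namespace, {"spec": {"replicas": desired_replicas(cpu, cur)}})

print(desired_replicas(90.0, 4))  # 5
print(desired_replicas(10.0, 4))  # 3
print(desired_replicas(50.0, 4))  # 4
```

Keeping the decision logic separate from the API call makes it easy to unit-test the thresholds before wiring the script into a cluster.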

DevOps automation using Python - Part 1

March 05, 2023

DevOps automation using Python

DevOps automation is the practice of automating the process of building, testing, and deploying software. Python is a popular language for DevOps automation because of its simplicity and versatility. In this article, we will cover the basics of getting started with DevOps automation using Python.

Prerequisites

Before we begin, make sure you have Python installed on your system. You can download Python from the official website at https://www.python.org/downloads/. We will also be using some Python packages, so make sure you have the following packages installed:

pip: The package installer for Python.

virtualenv: A tool that creates isolated Python environments.

Setting up a Virtual Environment

The first step in getting started with Python DevOps automation is to set up a virtual environment. A virtual environment allows you to create a separate environment for your Python project, which can help avoid conflicts with other packages on your system.

To create a virtual environment, open a terminal or command prompt and navigate to the directory where you want to create your project. Then, run the following commands:

python3 -m venv myproject
source myproject/bin/activate

This will create a new virtual environment called myproject and activate it.

Installing Packages

Now that we have our virtual environment set up, we can install the packages we need for our project. In this example, we will install the requests package, which allows us to send HTTP requests from our Python code. To install the package, run the following command:

pip install requests

Writing a Simple Script

With our virtual environment and packages set up, we can now write a simple Python script to automate a task. In this example, we will write a script that sends an HTTP GET request to a website and prints the response.

Create a new file called get_request.py and add the following code:

import requests

url = 'https://www.example.com'
response = requests.get(url)

print(response.text)

Save the file and run it with the following command:

python get_request.py

This will send an HTTP GET request to https://www.example.com and print the response.

How to use Python for configuration management with Ansible

Ansible is an open-source configuration management tool that allows you to automate the provisioning, configuration, and deployment of servers and applications. Python is the language that Ansible is built upon, making it a natural choice for writing Ansible modules and playbooks. In this article, we will cover how to use Python for configuration management with Ansible.

Prerequisites

Before we begin, make sure you have Ansible installed on your system. You can install Ansible using pip:

pip install ansible

Ansible Modules

Ansible modules are reusable pieces of code that can be used to perform specific tasks, such as installing a package or configuring a service. Ansible comes with many built-in modules, but you can also create your own custom modules using Python.

To create a custom module, you write a Python file built around Ansible's AnsibleModule helper: it declares the parameters the module accepts, performs the task, and reports the result as JSON via exit_json or fail_json. Here is an example of a custom module that installs a package using apt:

from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(
        argument_spec=dict(
            package_name=dict(type='str', required=True),
        )
    )
    package_name = module.params['package_name']
    rc, out, err = module.run_command(['apt-get', 'install', '-y', package_name])
    if rc != 0:
        module.fail_json(msg='Package installation failed', stderr=err)
    module.exit_json(changed=True, msg='Package installed successfully', output=out)

if __name__ == '__main__':
    main()

Save this file as install_package.py in a library/ directory next to your playbook; Ansible automatically picks up custom modules from there.

Ansible Playbooks

An Ansible playbook is a YAML file that defines a set of tasks to be executed on a set of hosts. Each task is defined as a module with parameters that define how the task should be performed. In the playbook, you can use the custom Python module we created earlier.

Here is an example of a playbook that installs a package using our custom module:

---
- name: Install package
  hosts: all
  become: true
  tasks:
    - name: Install nginx with the custom module
      install_package:
        package_name: nginx

Note that a custom module is invoked by its name as the task key, with its parameters nested underneath. Save this file as install_package.yml in the directory that contains your library/ folder.

To run the playbook, use the following command:

ansible-playbook install_package.yml

This will run the playbook on all hosts defined in your Ansible inventory file.

Writing CI/CD pipelines with Python scripts and Jenkins

Jenkins is a popular open-source automation server that can be used to implement continuous integration and continuous delivery (CI/CD) pipelines. Python is a versatile language that can be used to write scripts to automate various tasks in the CI/CD pipeline. In this article, we will cover how to write CI/CD pipelines with Python scripts and Jenkins.

Prerequisites

Before we begin, make sure you have Jenkins installed on your system. You can download Jenkins from the official website at https://www.jenkins.io/download/. We will also be using some Python packages, so make sure you have the following packages installed:

pip: The package installer for Python.

virtualenv: A tool that creates isolated Python environments.

Setting up a Virtual Environment

The first step in writing CI/CD pipelines with Python scripts and Jenkins is to set up a virtual environment. A virtual environment allows you to create a separate environment for your Python project, which can help avoid conflicts with other packages on your system.

To create a virtual environment, open a terminal or command prompt and navigate to the directory where you want to create your project. Then, run the following commands:

python3 -m venv myproject
source myproject/bin/activate

This will create a new virtual environment called myproject and activate it.

Installing Packages

Now that we have our virtual environment set up, we can install the packages we need for our project. In this example, we will install the pytest package, which allows us to write and run tests in Python. To install the package, run the following command:

pip install pytest

Writing Python Scripts

With our virtual environment and packages set up, we can now write Python scripts to automate tasks in the CI/CD pipeline. In this example, we will write a script that runs tests using pytest.

Create a new file called test.py and add the following code:

import pytest

def test_example():
    assert 1 + 1 == 2

Save the file and run it with the following command:

pytest test.py

This will run the test and print the results.

Configuring Jenkins

Now that we have our Python script, we can configure Jenkins to run it as part of a CI/CD pipeline.

  • Open Jenkins in your web browser and click on "New Item" to create a new project.
  • Enter a name for your project and select "Freestyle project" as the project type.
  • In the "Source Code Management" section, select your version control system and enter the repository URL.
  • In the "Build" section, click on "Add build step" and select "Execute shell".
  • In the "Command" field, enter the following command (replace /path/to/venv and /path/to/test.py with the actual paths to your virtual environment and test script):

source /path/to/venv/bin/activate && pytest /path/to/test.py

  • Click on "Save" to save your project configuration.

Running the Pipeline

With Jenkins configured, we can now run the pipeline to test our code. To run the pipeline, click on "Build Now" in the project page. Jenkins will run the pipeline and display the results.

Using Python for monitoring and logging in DevOps

Monitoring and logging are critical aspects of DevOps. They allow you to track the performance of your applications and infrastructure, detect and diagnose issues, and make data-driven decisions to improve your systems. Python is a versatile language that can be used to create powerful monitoring and logging tools. In this article, we will cover how to use Python for monitoring and logging in DevOps.

Monitoring with Python

Python can be used to monitor various aspects of your applications and infrastructure, including server performance, resource utilization, and application metrics. One popular Python library for monitoring is psutil, which provides an easy-to-use interface for accessing system information.

To use psutil, you can install it using pip:

pip install psutil

Once installed, you can use it to retrieve information about CPU usage, memory usage, disk usage, and more. For example, the following Python code retrieves the system-wide CPU and memory usage:

import psutil

# Get CPU usage
cpu_percent = psutil.cpu_percent()

# Get memory usage
memory = psutil.virtual_memory()
memory_percent = memory.percent

You can use these metrics to create custom monitoring scripts or integrate with monitoring tools like Nagios, Zabbix, or Prometheus.

Logging with Python

Logging is essential for detecting and diagnosing issues in your applications and infrastructure. Python's built-in logging module provides a powerful and flexible logging framework that you can use to log messages at various levels of severity and route them to different destinations, such as files, syslog, or external services.

To use logging, you can import the module and create a logger instance:

import logging

logger = logging.getLogger(__name__)

You can then use the logger instance to log messages at various levels of severity, such as debug, info, warning, error, or critical:

logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
logger.critical('This is a critical message')

You can also customize the logging behavior by configuring the logger instance with different handlers and formatters. For example, the following code configures the logger to write messages to a file and add a timestamp to each message:

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.INFO)
file_handler.setFormatter(formatter)

logger.addHandler(file_handler)

logger.info('This is a log message')

This will create a log file called app.log and write log messages to it in the following format:

2022-03-05 15:34:55,123 - __main__ - INFO - This is a log message

You can use these logs to troubleshoot issues in your applications and infrastructure or integrate with logging tools like ELK, Graylog, or Splunk.

How to manage infrastructure as code with Terraform and Python

Terraform is a popular open-source tool used for infrastructure as code (IaC) automation. It allows you to define, provision, and manage cloud infrastructure resources in a declarative way using configuration files. Terraform supports many cloud platforms, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

While Terraform provides its own configuration language, HCL (HashiCorp Configuration Language), you can also use Python to manage your Terraform code. In this article, we will cover how to manage infrastructure as code with Terraform and Python.

Setting up Terraform and Python

To get started, you will need to install Terraform and Python on your machine. You can download the latest version of Terraform from the official website (https://www.terraform.io/downloads.html) and install it according to the installation instructions for your operating system. You can install Python using your operating system's package manager or download it from the official website (https://www.python.org/downloads/).

Once you have installed Terraform and Python, you can create a new Terraform project and initialize it with the required Terraform providers and modules. For example, the following Terraform code creates an AWS EC2 instance:

provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

You can save this code in a file called main.tf and run the following command to initialize the Terraform project:

terraform init

Using Python with Terraform

Python can be used to generate, manipulate, and validate Terraform code using various libraries and tools. One popular library for working with Terraform is python-terraform, which provides a Pythonic interface to the Terraform CLI.

To use python-terraform, you can install it using pip:

pip install python-terraform

Once installed, you can create a Python script that uses python-terraform to execute Terraform commands and interact with the Terraform state (note that the package is imported as python_terraform). For example, the following Python code initializes the Terraform project, applies the configuration, and retrieves the IP address of the EC2 instance:

from python_terraform import Terraform

tf = Terraform(working_dir='./terraform')

tf.init()
tf.apply(skip_plan=True)

public_ip = tf.output('public_ip')

print(public_ip)

You can also use Python to generate Terraform code dynamically based on various inputs, such as configuration files, user input, or API responses. For example, the following Python code generates a Terraform configuration for an AWS S3 bucket based on a list of bucket names:
buckets = ['bucket1', 'bucket2', 'bucket3']

# Braces that belong to the Terraform syntax must be doubled so that
# str.format() does not treat them as replacement fields.
tf_code = """
provider "aws" {{
  region = "us-west-2"
}}

{}
"""

bucket_code = """
resource "aws_s3_bucket" "{}" {{
  bucket = "{}"
}}
"""

bucket_configs = [bucket_code.format(name, name) for name in buckets]

full_code = tf_code.format('\n'.join(bucket_configs))

with open('s3.tf', 'w') as f:
    f.write(full_code)

This will generate a Terraform configuration file called s3.tf with the following content:

provider "aws" {
  region = "us-west-2"
}

resource "aws_s3_bucket" "bucket1" {
  bucket = "bucket1"
}

resource "aws_s3_bucket" "bucket2" {
  bucket = "bucket2"
}

resource "aws_s3_bucket" "bucket3" {
  bucket = "bucket3"
}

Please continue reading DevOps automation using Python - Part 2


Monday, 16 November 2020

React Native for Beginners

November 16, 2020

We can build native, cross-platform mobile applications using JavaScript and React. React Native uses React to create rich, native mobile UI interfaces.

So, who is using React Native? According to the official site, thousands of applications use it, from established Fortune 500 companies to up-and-coming startups. Some of the well-known apps built with React Native include Facebook, Instagram, Bloomberg, Pinterest, Uber, and Skype.

Before starting, it is recommended that you have basic knowledge of JavaScript and React so that you can follow the React Native examples.

Setting up the Environment

To begin, we need the applications below installed on the system:

NodeJS and NPM

Python

Java SE Development Kit (JDK)

Android SDK, Android Virtual Devices

Visual Studio Code