Shebang Your Shell Commands with GenAI using AWS Bedrock
Sagar Barai
Cloud & DevOps
Generative AI (GenAI) is no longer a mystery—it's been around for over two years now. Developers are leveraging GenAI for a wide range of tasks: writing code, handling customer queries, powering RAG pipelines for data retrieval, generating images and videos from text, and much more.
In this blog post, we’ll integrate an AI model directly into the shell, enabling real-time translation of natural language queries into Linux shell commands—no more copying and pasting from tools like ChatGPT or Google Gemini. Even if you're a Linux power user who knows most commands by heart, there are always moments when a specific command escapes you. We'll use Amazon Bedrock, a fully managed serverless service, to run inferences with the model of our choice. For development and testing, we’ll start with local model hosting using Ollama and Open WebUI. Shell integration examples will cover both Zsh and Bash.
Setting up Ollama and Open WebUI for prompt testing
By default, Ollama listens on port 11434. If you're comfortable without a user interface like ChatGPT, you can start sending prompts directly to the /api/generate endpoint using tools like curl or Postman. Alternatively, you can run a model from the shell using:
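For example, here is a sketch of a typical setup (the exact commands aren't preserved in the original; mistral is just an illustrative model, and the Open WebUI container assumes Docker is installed):

```bash
# Pull and run a model locally (pick any model Ollama supports)
ollama run mistral

# In a separate terminal, start Open WebUI and map it to port 8080 on the host.
# --add-host lets the container reach the Ollama server running on the host.
docker run -d -p 8080:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```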
This starts the Open WebUI on the default port 8080. Open your favorite web browser and navigate to http://localhost:8080/. Set an initial username and password. Once configured, you’ll see an interface similar to ChatGPT. You can choose your model from the dropdown in the top-left corner.
Testing the prompt in Open WebUI and with API calls:
Goal:
User types a natural language query
Model receives the input and processes it
Model generates a structured JSON output
The shell replaces the original query with the actual command
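The system prompt below is a sketch of what we'll test (the exact wording is up to you, but the `<Query Here>` placeholder and the JSON-only rule are the parts that matter; we'll reuse this prompt in Bedrock later):

```text
You are a Linux shell expert. Convert the user's natural language query
into a single Linux shell command.

Rules:
- Respond ONLY with valid JSON in exactly this form: {"command": "<shell command>"}
- Do not add explanations, markdown, or any text outside the JSON object.
- Use straight quotes (') only, never smart quotes.

Query: <Query Here>
```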
Why Structured Output Instead of Plain Text?
You might wonder—why not just instruct the model to return a plain shell command with strict prompting rules? During testing, we observed that even with rigid prompt instructions, the model occasionally includes explanatory text. This often happens when the command in question could be dangerous or needs caution.
For instance, the dd command can write directly to disk at a low level. Models like Mistral or Llama may append a warning or explanation along with the command to prevent accidental misuse. Using structured JSON helps us isolate the actual command cleanly, regardless of any extra text the model may generate.
Note: Ensure that smart quotes (‘’) are not used in your actual command—replace them with straight quotes ('') to avoid errors in the terminal.
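The same prompt can be exercised against Ollama's REST API. A sketch with curl (the system prompt is abbreviated here; "format": "json" asks Ollama to constrain the output to valid JSON):

```bash
curl -s http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "system": "You are a Linux shell expert. Respond ONLY with JSON of the form {\"command\": \"<shell command>\"}.",
  "prompt": "list all docker containers",
  "format": "json",
  "stream": false
}'
```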
This allows you to interact with the model programmatically, bypassing the UI and integrating the prompt into automated workflows or CLI tools.
Setting up AWS Bedrock Managed Service
Log in to the AWS Console and navigate to the Bedrock service.
Under Foundation Models, filter by Serverless models.
Subscribe to a model that suits code generation use cases. For this blog, I’ve chosen Anthropic Claude 3.7 Sonnet, known for strong code generation capabilities. Alternatively, you can go with Amazon Titan or Amazon Nova models, which are more cost-effective and often produce comparable results.
Configure Prompt Management
1. Once subscribed, go to the left sidebar and under Builder Tools, click on Prompt Management.
2. Click Create prompt and give it a name—e.g., Shebang-NLP-TO-SHELL-CMD.
3. In the next window:
Expand System Instructions and paste the structured prompt we tested earlier (excluding the <Query Here> placeholder).
In the User Message, enter {{question}} — this will act as a placeholder for the user's natural language query.
4. Under Generative AI Resource, select your subscribed model.
5. Leave the randomness and diversity settings as default. You may reduce the temperature slightly to get more deterministic responses, depending on your needs.
6. At the bottom of the screen, you should see the question variable under the Test Variables section. Add a sample value like: `list all docker containers`
7. Click Run. You should see the structured JSON response in the right pane (a sample is shown after this list).
8. If the output looks good, click Create Version to save your tested prompt.
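With a prompt along the lines of the sketch above and that sample value, the response should look something like:

```json
{
  "command": "docker ps -a"
}
```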
Setting Up a “Flow” in AWS Bedrock
1. From the left sidebar under Builder Tools, click on Flows.
2. Click the Create Flow button.
Name your flow (e.g., ShebangShellFlow).
Keep the "Create and use a new service role" checkbox selected.
Click Create flow.
Once created, you’ll see a flow graph with the following nodes:
Flow Input
Prompts
Flow Output
Configure Nodes
Click on the Flow Input and Flow Output nodes. Note down the Node Name and Output Name (default: FlowInputNode and document, respectively).
Click on the Prompts node, then in the Configure tab on the left:
Select "Use prompt from prompt management"
From the Prompt dropdown, select the one you created earlier.
Choose the latest Version of the prompt.
Click Save.
Test the Flow
You can now test the flow by providing a sample natural language input like:
`list all docker containers`
Finalizing the Flow
1. Go back to the Flows list and select the flow you just created.
2. Note down the Flow ID or ARN.
3. Click Publish Version to create the first version of your flow.
4. Navigate to the Aliases tab and click Create Alias:
Name your alias (e.g., prod or v1).
Choose "Use existing version to associate this alias".
From the Version dropdown, select Version 1. Click Create alias.
5. After it's created, click on the new alias under the Aliases tab and note the Alias ARN—you'll need this when calling the flow programmatically.
Shell Integration for Zsh and Bash
Configuring IAM Policy
To use the Bedrock flow from your CLI, you need a minimal IAM policy as shown below:
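A sketch of such a policy (substitute your region, account ID, flow ID, and alias ID, and keep the resource scoped as tightly as possible):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeFlow",
      "Resource": "arn:aws:bedrock:<region>:<account-id>:flow/<flow-id>/alias/<alias-id>"
    }
  ]
}
```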
To simplify request signing (AWS SigV4), language-specific SDKs are available. For this example, we use the AWS SDK for JavaScript v3 and the InvokeFlowCommand from the @aws-sdk/client-bedrock-agent-runtime package:
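Here is a sketch of the invocation (error handling trimmed; the FLOW_ID and FLOW_ALIAS_ARN environment variable names are placeholders, not necessarily what the repo uses):

```javascript
// invoke-flow.mjs: read a natural language query from stdin, print the generated command
import {
  BedrockAgentRuntimeClient,
  InvokeFlowCommand,
} from "@aws-sdk/client-bedrock-agent-runtime";

const client = new BedrockAgentRuntimeClient({ region: process.env.AWS_REGION });

// Read all of stdin as the query
let query = "";
for await (const chunk of process.stdin) query += chunk;
query = query.trim();

const response = await client.send(
  new InvokeFlowCommand({
    flowIdentifier: process.env.FLOW_ID,             // ID or ARN of the Bedrock flow
    flowAliasIdentifier: process.env.FLOW_ALIAS_ARN, // alias ARN noted earlier
    inputs: [
      {
        nodeName: "FlowInputNode",  // the flow's input node
        nodeOutputName: "document", // the input node's output name
        content: { document: query },
      },
    ],
  })
);

// The result arrives as an event stream; pick out the flow output node's document
for await (const event of response.responseStream) {
  if (event.flowOutputEvent) {
    const doc = event.flowOutputEvent.content.document;
    // The prompt returns JSON like {"command": "..."}; tolerate a string or an object
    const { command } = typeof doc === "string" ? JSON.parse(doc) : doc;
    process.stdout.write(command);
  }
}
```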
You'll need to substitute the following values in your SDK/API calls:
flowIdentifier: ID or ARN of the Bedrock flow
flowAliasIdentifier: Alias ARN of the flow version
nodeName: Usually FlowInputNode
content.document: Natural language query
nodeOutputName: Usually document
Shell Script Integration
The Node.js script reads a natural language query from standard input (either piped or redirected) and invokes the Bedrock flow accordingly. You can find the full source code of this project in the GitHub repo: 🔗 https://github.com/azadsagar/ai-shell-helper
Environment Variables
To keep the script flexible across local and cloud-based inference, the following environment variables are used:
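The variable names below (apart from INFERENCE_MODE) are illustrative; check the repo's README for the exact set:

```bash
export INFERENCE_MODE=bedrock             # "bedrock" or "ollama"
export AWS_REGION=us-east-1               # region where the flow lives (Bedrock mode)
export FLOW_ID=<your-flow-id>             # Bedrock flow ID or ARN
export FLOW_ALIAS_ARN=<your-alias-arn>    # alias ARN from the previous section
export OLLAMA_URL=http://localhost:11434  # Ollama endpoint (local mode)
```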
Set INFERENCE_MODE to ollama if you want to use a locally hosted model.
Configure the Zsh/Bash shell to perform the magic: Shebang
When you type at a Zsh prompt, your input is captured in a shell variable called LBUFFER. This variable is both readable and writable: updating LBUFFER rewrites the current command line in place.
In the case of Bash, the corresponding variable is READLINE_LINE. However, unlike Zsh, you must manually update the cursor position after modifying the input. You can do this by calculating the string length using ${#READLINE_LINE} and setting the cursor accordingly. This ensures the cursor moves to the end of the updated line.
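A sketch of the Bash side (the helper script path and the key binding are placeholders; adjust them to your setup):

```bash
# ~/.bashrc
ai_shell_helper() {
  # Send the current line to the helper script and replace it with the result
  READLINE_LINE=$(printf '%s' "$READLINE_LINE" | node ~/ai-shell-helper/index.js)
  # Bash does not reposition the cursor for us; jump to the end of the new line
  READLINE_POINT=${#READLINE_LINE}
}
# Bind the function to Ctrl+G (an example; pick any free chord)
bind -x '"\C-g": ai_shell_helper'
```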
From Natural Language to Shell Command
Typing natural language directly in the shell and pressing Enter would usually throw a “command not found” error. Instead, we’ll map a shortcut key to a shell function that:
Captures the input (LBUFFER for Zsh, READLINE_LINE for Bash)
Sends it to a Node.js script via standard input
Replaces the shell line with the generated shell command
Zsh Integration Example
In Zsh, you must register the shell function as a Zsh widget, then bind it to a shortcut using bindkey.
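A sketch (again, the script path and key binding are placeholders):

```zsh
# ~/.zshrc
ai_shell_helper() {
  # LBUFFER holds everything typed so far; pipe it to the helper and write the result back
  LBUFFER=$(printf '%s' "$LBUFFER" | node ~/ai-shell-helper/index.js)
  zle redisplay
}
# Register the function as a ZLE widget, then bind it to Ctrl+G (example binding)
zle -N ai_shell_helper
bindkey '^G' ai_shell_helper
```

Now type `list all docker containers` at the prompt, press the bound key, and the line is rewritten in place with the generated command.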