Nushell for SREs - Modern Shell Scripting for Internal Tools
Site Reliability Engineers (SREs) often live in the command line, juggling Bash scripts, one-liners, and the occasional Go/Python tool to automate tasks. Yet building complex internal CLI tools or automation workflows with traditional shells can be painful and error-prone. Enter Nushell – a modern, cross-platform shell that treats data as structured tables instead of plain text. Nushell (or just `nu`) aims to blend the convenience of shell scripting with the robustness of a programming language. In this post, we’ll introduce Nushell and show how it can empower SREs to craft internal CLIs and automation with far less frustration than Bash, while avoiding the overhead of compiled languages.
What is Nushell?
Nushell is a new type of shell (written in Rust) that draws inspiration from PowerShell, functional programming, and modern CLI tools. Its key differentiator is that it pipelines structured data (tables, JSON, etc.) rather than plain text streams. This means commands produce and consume rich data types – you can filter, sort, and transform data without sed/awk hacks. Nushell also offers strong piping, intelligent autocompletion, a real scripting language with variables and loops, and solid error handling (including exceptions) out of the box. In short, it’s designed to make your CLI experience more like writing a program and less like wrestling with text parsing.
Why Traditional Shells Fall Short for SRE Workflows
Before diving deeper into Nushell, let’s clarify the pain points SREs face with traditional shells (Bash, Zsh, etc.) when building internal tools:
- Stringly-Typed Everything: In Bash, everything is text. Command outputs are unstructured strings that require complex parsing. For example, to filter `ls` output by file size, you might pipe to `awk` or write a regex – fragile solutions that break with formatting changes. There’s no native concept of “an object” or “a table row”. This makes tasks like parsing JSON or YAML extremely cumbersome (usually invoking external tools like `jq` or `yq`).
- Quoting and Escaping Hell: Anyone who’s written a non-trivial Bash script knows the horror of quoting and escaping. One missed quote or an unexpected space in input can break the entire script. Handling whitespace, special characters, or nested commands often leads to convoluted syntax that’s hard to read and easy to get wrong.
- Poor Error Handling: Bash has no real exceptions. If a command fails, you either check `$?` after each step or use `set -e` and pray it exits where it should. Capturing errors or handling them gracefully (e.g. retrying a step) requires manual checks. This is tedious for complex workflows and often ignored, leading to brittle scripts.
- Limited Structures and Reusability: Bash offers only simple arrays and associative arrays – no rich data structures. Multi-line scripts become hard to maintain, with tricky function definitions and sourcing files. Reusing code or organizing logic is far harder than in a “real” programming language. As a result, SREs sometimes resort to writing one-off Go or Python programs for tasks that outgrow Bash.
- Autocompletion Setup: While shells like Bash or Zsh can have autocompletion, it typically requires separate configuration or completion scripts. Out of the box, Bash’s completion for custom scripts or commands is minimal. This slows down interactive use, especially when exploring new commands or APIs.
In summary, while Bash is ubiquitous and fine for simple tasks, it becomes a minefield for building robust internal CLIs or advanced automation. This is where Nushell comes in – addressing these issues by design.
Nushell’s Approach: Structured Data and Strong Pipelines
Nushell was built to tackle the very pain points above. Its philosophy is to make the shell data-first and highly composable. Rather than treating output as an opaque string, Nushell understands it as structured data you can query and manipulate. For example, running `ls` in Nushell doesn’t give you a text blob – it returns a table where each row is a file, with columns like name, type, size, and modified time.
You can then pipe that table into the next command. In Nushell, commands fall into three categories: those that produce data (sources), those that filter/transform data (filters), and those that consume/output data (sinks). This is an extension of the Unix pipeline idea, but with richer semantics than just stdout/stderr.
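To make the source/filter/sink split concrete, here’s a tiny pipeline you can paste into any Nushell session (the 1mb threshold is arbitrary):
```nu
# source  ->  filters (typed comparison, sort)  ->  sink (render)
ls | where size > 1mb | sort-by modified | table
```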
- Structured Data Pipelines: Every output can be a structured value (table, list, record, etc.), not just text. You can pipe the output of one command directly into another as data. For example, `ls | where type == "dir"` in Nushell will take the table from `ls` and filter it to only directories – no grep or awk needed. You can then pipe that into `table` (which pretty-prints a table) or to another command. This structured pipeline model is reminiscent of PowerShell but in a lightweight, cross-platform shell.
- First-Class JSON/YAML/CSV Support: Nushell can parse common data formats on the fly. It has built-in commands to read JSON, YAML, TOML, CSV, etc. directly into structured form. In many cases you don’t even need an explicit parser – for example, `open config.yaml` will detect the `.yaml` extension and load the file as structured data. If you have a JSON API response, you can pipe it to `from json` (if not auto-detected) and immediately start querying fields. This means as an SRE you can drop `jq`, `awk`, and friends for a lot of tasks and manipulate data directly in the shell.
- Strong, Flexible Piping: Because of structured pipelines, composition of commands in Nushell is very powerful. You can chain multiple transformations logically. Consider a typical DevOps scenario: you have a CSV log of deployments and want to find the last 5 failures in the EU region. In Nushell you might do:
```nu
open deployments.csv
| where status == "FAIL"
| where region == "EU"
| sort-by timestamp --reverse
| first 5
```
This one-liner opens a CSV as a table, filters rows twice (by status and region), sorts by timestamp in descending order (via `--reverse`), and then takes the first 5 results. Each `where` and `sort-by` operates on the table’s columns by name (e.g. `status`, `region`, `timestamp`). No text parsing or fragile piping between Unix tools – Nushell handles it internally as a proper query pipeline. The result comes out as a neat table of the 5 most recent European failures. This kind of clarity and power in one pipeline is a huge win for SREs dealing with data.
- Rich Autocompletion and UX: Nushell provides a modern interactive experience. It has fish-like tab completion for commands, flags, and even data types. When you type a command, Nushell can suggest subcommands or file names. It even knows the structure of many outputs, so you might get intelligent suggestions for fields after a `| where`. For example, after `ls | where`, pressing tab could show `name`, `type`, `size`, etc., because it knows `ls` produces those columns. This level of context-aware completion makes discovering Nushell commands and crafting pipelines much faster and less error-prone than in Bash. Additionally, Nushell’s prompts and error messages are user-friendly (and configurable). Little touches like syntax highlighting, inline suggestions, and a consistent command syntax all contribute to a more productive CLI experience for daily use.
- Built-in Error Handling: One of Nushell’s standout features is treating errors as values that can be caught and handled, similar to exceptions in programming languages. In Bash, if a command errors out, you either ignore it or handle it in a very manual way. Nushell, by contrast, supports `try ... catch` blocks to capture errors from commands and respond to them. For example, if you attempt to change directory to a non-existent path in a Nushell script, it throws an error that you can catch and handle, perhaps by printing a custom message (see the short sketch after this list). Starting with Nushell 0.72, you can wrap a command in a `try` block and then use `catch` to run fallback logic. This means your automation scripts can be much more robust – you can attempt an action and programmatically react if it fails, rather than just printing a stack trace or halting abruptly. For SREs automating critical tasks (like rolling back a deploy), having proper error handling in the script is invaluable.
- Real Scripting Language (with Types): Nushell isn’t just an interactive shell; you can write scripts (typically with a `.nu` extension) that have functions, variables, conditionals, loops, etc. In fact, Nushell scripts look more like a simple programming language than a series of shell commands. You can define functions using `def` (and even specify parameter types), use `if/else` logic, `for` loops or iteration, and more. For instance, here’s a snippet of a Nushell script function illustrating some capabilities:
```nu
#!/usr/bin/env nu

def deploy-cleanup [namespace?: string] {
    if $namespace == null {
        print "No namespace provided, exiting"
        exit 1
    }
    # Get all CrashLoopBackOff pods in the namespace and delete them
    let pods = (kubectl get pods -n $namespace -o json | from json | get items)
    let badPods = ($pods
        | where {|pod|
            $pod.status.containerStatuses? | default [] | any {|c|
                ($c.state.waiting?.reason? | default "") == "CrashLoopBackOff"
            }
        }
        | get metadata.name)
    for pod in $badPods {
        print $"Deleting ($pod) in ($namespace)..."
        kubectl delete pod $pod -n $namespace
    }
}
```
In this hypothetical example, we defined a function `deploy-cleanup` that expects an optional `namespace` argument (a string). It checks if the namespace was provided, otherwise exits with an error. Then it calls an external command (`kubectl`) to get pods as JSON, uses `from json` to parse it, filters pods where any container status is waiting with reason CrashLoopBackOff, extracts their names, and iterates to delete each one. Note how clean and high-level this script is compared to the Bash equivalent – no manual JSON parsing, clear loop and if structures, and use of a variable (`$badPods`) to store intermediate results. Nushell scripts can use `$env.VAR` for environment variables, `let` to define variables, and so on. This means you get the maintainability of a “real” language while still staying in the shell realm. Functions can even be defined to alter the environment (using `def-env`) when needed, similar to Bash’s `source` behavior.
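As a quick taste of the error handling described above, here’s a minimal `try ... catch` sketch (the directory is deliberately bogus):
```nu
# If cd fails, run the fallback block instead of aborting the script.
try {
    cd /definitely/not/a/real/path
} catch {
    print "Could not enter directory, falling back to /tmp"
    cd /tmp
}
```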
The combination of these features positions Nushell as a powerful tool for SREs. It is trying to hit the sweet spot in between quick-and-dirty Bash scripting and heavy-weight programming in Go/Rust for every little task. You get much more reliability and functionality than a Bash script, but it’s still interactive, dynamic, and easy to iterate on like a shell.
Nushell vs Bash vs Go: Hitting the Sweet Spot
Let’s explicitly compare the approach of using Nushell for internal tools versus the usual alternatives:
- Bash/Unix Shells: Bash is installed everywhere and every SRE knows the basics. It shines for very simple glue logic and one-liners. However, as described, anything beyond trivial quickly becomes unmanageable in Bash. Things like parsing JSON from a REST API or performing complex multi-step data filtering are possible, but you’ll be stringing together multiple external utilities, each with their own syntax (grep, awk, jq, etc.) and handling edge cases for each. Maintenance and readability suffer greatly. In team environments, a long Bash script for an internal tool can become a liability; only the original author truly understands its quirks. Debugging Bash in production scripts is also a headache – error messages are often just a failed command with little context.
- Python/Go (Scripts or Binaries): On the other end, some teams write internal CLIs in a proper programming language like Python or Go for more complex tasks. For example, you might write a Python script to query an API and filter results, or a Go program to manipulate Kubernetes YAML and perform some action. This brings robust libraries and type safety to your aid. However, the trade-off is development time and flexibility. Writing even a small tool in Python/Go involves setting up argument parsing, handling output formatting, compiling (for Go), packaging, etc. It’s overkill if you just need a quick automation for a one-time task or to glue a couple of systems together. Also, these tools are not interactive by nature – you run them with some flags and that’s it. If mid-operation you want to tweak something or inspect an intermediate result, you’re out of luck unless you add REPL capabilities or extensive logging. In short, while compiled or high-level languages are great for production-grade tools, using them for every internal workflow can slow you down.
Nushell positions itself in the middle ground. It aims to give you the interactive, ad-hoc power of a shell with the composability and safety of a programming language. You can start in Nushell’s REPL (read-eval-print loop), experiment with commands to get the output you want, and then easily turn that into a script or alias. You don’t need a full compile/test cycle; changes are immediate and you can iterate quickly. At the same time, Nushell’s structured data and control flow mean your scripts are far more robust than the equivalent Bash. There’s less guesswork (“did that string have a newline at end?” or “which field was that, 3rd or 4th after splitting by space?”) because you are dealing with named columns and proper data types.
By using Nushell for internal CLIs, you get the benefit of quick development (just like writing a shell script) but with far fewer bugs and surprises. And unlike a one-off Go program, a Nushell script can be easily tweaked on the fly by the team when requirements change – you don’t have to rebuild binaries or manage dependencies; Nushell is the only dependency. It’s a very agile approach to automation, well-suited for the dynamic world of site reliability engineering where tasks can be unique and urgent.
Practical Examples: Nushell in Action for SRE Tasks
Let’s walk through some practical examples that demonstrate how SREs can leverage Nushell for internal tooling and automation. These examples are inspired by common scenarios and showcase Nushell’s features.
Example 1: JSON Parsing and Cloud Automation
Scenario: You need to get a list of EC2 instances in AWS that are stopped, and then perhaps start them or report them. Normally, you might use the AWS CLI and then `jq` to filter the JSON output, or write a Python script using boto3. With Nushell, this can be done in one pipeline.
Assume you have the AWS CLI configured. You can run it from Nushell as an external command (Nushell can call any external command seamlessly). The AWS CLI can output JSON, which Nushell can parse. Here’s how you could get all stopped EC2 instances in a region:
```nu
# List all stopped EC2 instances in JSON and filter via Nushell
aws ec2 describe-instances --filters "Name=instance-state-name,Values=stopped" --output json
| from json
| get Reservations.Instances
| flatten
| where State.Name == "stopped"
| select InstanceId InstanceType LaunchTime
```
Let’s break that down:
- `aws ec2 describe-instances ... --output json` – this calls the AWS CLI to get instance data in JSON format.
- `| from json` – Nushell takes the JSON text from AWS and parses it into structured data (a record with a `Reservations` key).
- `| get Reservations.Instances | flatten` – here we navigate through the structured data: `Reservations` is an array, and we collect the `Instances` from each reservation, flattening the result into a single list of instance objects.
- `| where State.Name == "stopped"` – we filter that list to only those instances whose state name is “stopped”. (The AWS JSON has a nested object `State` with field `Name`.)
- `| select InstanceId InstanceType LaunchTime` – finally, we select a few relevant fields for output (instance ID, type, and launch time). Nushell will output a tidy table with those columns for each stopped instance.
This one-liner replaces what might be a multi-step process with `jq` or a custom script. You could further pipe this to another command: for example, to automatically start those instances, you could pipe into `each` and call the start command:
```nu
... | get InstanceId | each {|id| aws ec2 start-instances --instance-ids $id }
```
Here, the closure parameter `$id` is bound to each item (instance ID) passed by the pipeline into the `each` block. Nushell’s `each` runs the given block for every element of the input list. In this case, it would call the AWS CLI to start each stopped instance. Notice how with Nushell you can mix external commands (the `aws` CLI) with internal processing (`from json`, `where`, etc.) fluidly. This kind of mini-CLI for cloud automation can be written in minutes and saves a ton of typing and context-switching between shell and JSON tools.
Example 2: Simplifying Kubernetes Workflow
Scenario
Your platform team manages Kubernetes clusters, and you frequently need to inspect or modify Kubernetes resource manifests (YAML files). You also sometimes need to perform bulk operations, like deleting all pods in CrashLoopBackOff as we sketched earlier. With Nushell’s YAML support and scripting, these tasks become trivial.
Inspecting a Kubernetes YAML
Suppose you have a `deployment.yaml` and you want to quickly check the image tags of all containers in the deployment spec. In Nushell, you could do:
```nu
open deployment.yaml | get spec.template.spec.containers.image
```
What happens here is `open deployment.yaml` reads the YAML file and parses it (since it recognizes the `.yaml` extension). The `get spec.template.spec.containers.image` then navigates the nested structure: it drills down into `spec -> template -> spec -> containers` (which is a list), and for each container it extracts the `image` field.
The output will be a list of image strings (for example, it might list `myservice:1.2.3`, `sidecar:latest`, etc.). This took one command – no need to write a Python script or use `yq` to pluck values.
You can just as easily edit values (Nushell has an `update` command to modify data in place). For instance, you could update the image tag by piping into an `update` and then output back to YAML:
```nu
open deployment.yaml
| update spec.template.spec.containers.0.image "myservice:1.2.4"
| to yaml
| save deployment_updated.yaml
```
In the above, `update` creates a new structure with the first container’s image field set to a new tag, and `to yaml` serializes it back to YAML text, which we write to a file with `save`. This demonstrates how Nushell can function almost like `sed` for structured data – performing an in-place change inside a YAML/JSON.
Kubernetes Bulk Operation Script
Now, consider the task of cleaning up CrashLoopBackOff pods we discussed. In Bash, you’d parse `kubectl get pods` output or JSON, loop, etc., which is doable but clunky. With Nushell, you can write a script `kclean.nu`:
```nu
#!/usr/bin/env nu
# kclean.nu: Delete all CrashLoopBackOff pods in a given namespace
def main [namespace: string] {
    let pods = (kubectl get pods -n $namespace -o json | from json | get items)
    let badpods = ($pods
        | where {|pod|
            $pod.status.containerStatuses? | default [] | any {|c|
                ($c.state.waiting?.reason? | default "") == "CrashLoopBackOff"
            }
        }
        | get metadata.name)
    if ($badpods | length) == 0 {
        print $"No CrashLoopBackOff pods in namespace ($namespace)."
        exit 0
    }
    for p in $badpods {
        print $"Deleting pod ($p)..."
        kubectl delete pod $p -n $namespace
    }
}
```
This Nushell script expects a namespace as argument. It uses `kubectl get pods -o json` and parses it. The JSON from Kubernetes lists pods under an `items` array, which we extract. Then we filter that list with a `where` closure that checks each container status for a waiting state with reason `CrashLoopBackOff`. We get the pod names and store them in `$badpods`.
If the list is empty, we print a message and exit. Otherwise, we loop through each pod name and delete it with `kubectl delete pod`.
Key Points
- We effortlessly handle JSON output from an external command (`kubectl`) and query deep into it with `where` – no need for `jq` or writing a separate program. This saves time and reduces the number of tools we depend on.
- The script uses high-level constructs (`if`, `for`, `let`), making it easy to read and maintain. An SRE on the team who knows basic Nushell can understand this script quickly, whereas an equivalent Bash script with `jq` and loops might be far more cryptic.
- We can distribute `kclean.nu` to the team, and as long as everyone has Nushell, they can run it with `nu kclean.nu <namespace>` or even make it executable and put it in `PATH`. Because Nushell is cross-platform, this same script works on Linux, Mac, or Windows environments (assuming `kubectl` is available) – great for teams that have mixed OS usage.
Example 3: Quick CSV Data Crunching
Scenario
You have a CSV file with service latency metrics (perhaps exported from a monitoring system). You want to quickly see, say, the top 5 highest latency entries for a certain endpoint and maybe output some summary. This kind of ad-hoc analysis is often done in Excel or with Python/Pandas, but Nushell can handle surprisingly complex queries directly.
Suppose `metrics.csv` has columns: `service`, `endpoint`, `response_time_ms`, `status_code`. We can use Nushell to find, for example, the slowest 5 responses for a given service and endpoint:
```nu
open metrics.csv
| where service == "order-service"
| where endpoint == "/api/v1/checkout"
| sort-by response_time_ms --reverse
| first 5
| to json
```
Explanation
- `open metrics.csv` parses the CSV into a table (columns become accessible by name).
- Two `where` filters narrow down to rows matching service `"order-service"` and a specific endpoint.
- `sort-by response_time_ms --reverse` sorts by response time in descending order (so highest latencies first).
- `first 5` takes the first 5 rows of that sorted result (the top 5 latencies).
- `to json` then converts those 5 rows to JSON output.
If you run this in the Nushell REPL and stop after `first 5`, you would see a nicely formatted table of the 5 slowest checkout requests. By adding `to json`, we demonstrate that Nushell can act as a data conversion tool (CSV to JSON in this case) while filtering.
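If you also want the summary the scenario hinted at, `group-by` can aggregate the same data. A sketch, assuming the same column names and that Nushell parsed the numeric columns as numbers:
```nu
# Per-endpoint request count and average latency for one service
open metrics.csv
| where service == "order-service"
| group-by endpoint
| transpose endpoint rows
| each {|g| {
    endpoint: $g.endpoint,
    count: ($g.rows | length),
    avg_ms: ($g.rows.response_time_ms | math avg)
  }}
```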
This could be part of a larger automation – for instance, maybe you feed this JSON to a monitoring API or just save it for a report.
The point is, in a few chained commands we achieved what normally might require a Python script with CSV parsing. The readability of the pipeline makes it easy to adjust (change the service name, or sort by a different field) on the fly. This encourages exploration and quick data debugging, which is very much in the realm of what SREs do (e.g., “find me all error responses over 500ms in the last hour of logs” – a task Nushell can handle elegantly with the right query).
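To make that last parenthetical concrete, here’s a hedged sketch of such a query, assuming a hypothetical `access_log.csv` with `timestamp`, `status_code`, and `response_time_ms` columns:
```nu
# Error responses slower than 500 ms within the last hour
open access_log.csv
| where status_code >= 500 and response_time_ms > 500
| where {|row| ($row.timestamp | into datetime) > ((date now) - 1hr) }
| sort-by response_time_ms --reverse
```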
Example 4: Building an Internal CLI with Nushell Scripts
Scenario
Your team wants a small CLI tool called `platform-cli` to help with common internal tasks like checking system status, clearing caches, scaling services, etc. Typically, you might reach for a Go library like Cobra to build a CLI, or use Python’s argparse. But Nushell scripts can serve as a quick way to implement such CLI commands without a full compile/test cycle.
One approach is to create a directory of Nushell scripts, one per subcommand, and a main script to dispatch. For example:
```
platform-cli.nu              # main entry
platform-cli-status.nu       # script to show status
platform-cli-clear-cache.nu  # script to clear cache
platform-cli-scale.nu        # script to scale a service
```
Each script could be written in Nushell and take parameters. The main `platform-cli.nu` could dispatch based on an argument (or you could even just rely on naming and call the appropriate script directly). A dispatcher sketch follows below.
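One way to wire up that dispatch is Nushell’s `main` subcommand convention: if a script defines commands named `"main <sub>"`, then `nu platform-cli.nu <sub>` invokes the matching one. A minimal sketch, assuming the script names from the listing above sit alongside each other and `nu` is on `PATH`:
```nu
#!/usr/bin/env nu
# platform-cli.nu: dispatch to the per-task scripts

def "main status" [] {          # nu platform-cli.nu status
    nu platform-cli-status.nu
}

def "main clear-cache" [] {     # nu platform-cli.nu clear-cache
    nu platform-cli-clear-cache.nu
}

def "main scale" [service: string, replicas: int] {
    nu platform-cli-scale.nu $service $replicas
}

def main [] {                   # fallback: show usage
    print "usage: platform-cli <status|clear-cache|scale>"
}
```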
Here’s a flavor of what one of these might look like, say `platform-cli-status.nu`:
```nu
# platform-cli-status.nu
def main [] {
    # Example: aggregate status from two sources
    let dbStatus = (./check_database.sh | from json)  # external script that returns JSON
    let apiStatus = (http get https://internal-api.company.com/health)  # JSON responses are parsed automatically
    if $dbStatus.up == false {
        print $"Database is DOWN. Last error: ($dbStatus.last_error)"
    } else {
        print $"Database is up. Connections: ($dbStatus.connections)"
    }
    if $apiStatus.status != "OK" {
        print $"API health check failed: ($apiStatus.message)"
        exit 2
    } else {
        print $"API is healthy. Latency: ($apiStatus.latency_ms) ms"
    }
}
```
Key Points
- You can call external programs or scripts (`check_database.sh`) and even parse their output if they return JSON.
- You can use Nushell’s built-in `http get` command to directly fetch an API (in this case, an internal health endpoint); JSON responses are parsed into structured data automatically.
- We use simple `if/else` logic to print status messages and set an exit code (`exit 2`) if something is wrong, so that the CLI can return a proper code to the caller or monitoring system.
- We leverage Nushell’s expressions to access fields (`$dbStatus.up`, `$apiStatus.status`) and interpolate variables in output strings.
This small script could be invoked via the main `platform-cli.nu` or simply run on its own. Because it’s Nushell, if tomorrow you decide to change the source of truth for status (maybe use a different API or add more checks), it’s a quick edit – no compilation.
Over time, you might accumulate a suite of Nushell scripts covering various internal tasks, effectively building a toolkit for your platform. Each script remains short and focused, and Nushell’s consistency means once an engineer learns one script, writing another is straightforward.
Nushell in Nix and Devbox Environments
Many SRE teams have started adopting tools like Nix (for reproducible builds and environments) or Devbox (for consistent dev environments) to manage their toolchain. Nushell fits particularly well in such setups:
- Easy Installation and Reproducibility: Nushell is available in most package managers, including Homebrew, apt, and Nix. In fact, the Nushell project provides a status badge for packaging across platforms. With Nix, you can pin a specific Nushell version in your `shell.nix` or `flake.nix`, ensuring every engineer on the team and your CI pipelines use the exact same Nushell version. This eliminates the “works on my machine” problem for shell scripts. You can, for example, add Nushell to your Devbox configuration (`devbox.json` or a flake) and on `devbox shell` everyone gets Nushell installed automatically.
- Isolation and Configurability: In Nix/Devbox, Nushell can be your interactive shell in development environments. Because Nushell reads config files (like `config.nu`) that you can check into your repo, you can predefine aliases, prompt customization, or helper functions that are available to everyone. For instance, you might alias `platform-cli` to `nu /path/to/scripts/platform-cli.nu` in the team dev environment (see the snippet after this list), so that folks can just type `platform-cli status` from Nushell and it runs the right script. The reproducible nature of these environments means the Nushell experience is consistent across Mac/Linux and across different engineers’ machines.
- Using Nushell in CI: Nix and similar tools can also be used in CI pipelines. Since Nushell is just a binary, you can include it in CI images or Devbox setups and use it to write CI tasks. For example, instead of writing a Bash script in a GitHub Action, you could run a Nushell script for better reliability. (There’s even a setup-nu action to easily install Nushell in GitHub Actions.) This could be used to do things like validate config files (Nushell can quickly parse a JSON or YAML and check for required fields), deploy components, or aggregate logs in a workflow – all with the same Nushell scripts you use locally.
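To make the shared-alias idea concrete, a checked-in `config.nu` could define the team alias mentioned above (the path is illustrative):
```nu
# config.nu snippet shared via the repo
alias platform-cli = nu /path/to/scripts/platform-cli.nu
```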
In summary, if your organization values reproducibility and uses tools like Nix, adding Nushell into the mix is a no-brainer. It’s just another package that brings a lot of capabilities. SREs working in ephemeral Devbox shells will appreciate having Nushell available for both interactive use and running the team’s automation scripts.
Conclusion: A Shell Worth Trying for the Modern SRE
Nushell is a bold reimagination of the shell, transforming the way we handle data and automation in the terminal. For SREs and DevOps engineers, it hits a sweet spot: more powerful and safer than Bash, yet more lightweight and iterative than writing full-blown applications in Go or Python. By using structured data pipelines, Nushell removes much of the tedium and brittleness associated with shell scripting. Common tasks like parsing JSON, manipulating config files, or gluing together system commands become simpler and more reliable. The ability to use Nushell both as an interactive shell and a scripting language means you can interactively develop your one-liners into scripts, fostering a very rapid development cycle for internal tools.
Yes, Nushell comes with a learning curve – it’s a different syntax and mindset than traditional POSIX shells. But the learning investment pays off quickly. You’ll find your scripts are more self-documenting (because of clear pipeline stages), and you’ll catch errors earlier (thanks to type checking and error handling). Autocompletion and helpful error messages make the experience pleasant, something rarely said about shell work. As Nushell matures (approaching 1.0 stability), it’s becoming increasingly viable as a daily driver shell. And because it’s open source and written in Rust, it’s both performant and being actively improved by a community.
In my (opinionated) view, Nushell is an ideal tool for SREs looking to up their automation game. It reduces the gap between quick scripts and maintainable tools. Instead of accumulating hard-to-maintain Bash scripts or over-architecting a simple task in Go, you can reach for Nushell and get the job done elegantly. Whether it’s querying cloud resources, massaging data from APIs, orchestrating system commands, or building a small CLI for your team, Nushell has you covered with a single consistent environment. It’s especially powerful in a world of Kubernetes and cloud where JSON/YAML are everywhere – Nushell speaks those natively, turning your terminal into a flexible dashboard.
Don’t just take my word for it – give Nushell a try in your next internal project. Install it (e.g. via Homebrew or Nix), spend an afternoon with the Nushell book or cookbook examples, and experiment with replacing a Bash script or two. Try writing a Nushell script to solve a problem you recently tackled in Bash or Python. You’ll likely be surprised at how succinctly you can express the solution, and how much more readable it is later. As with any tool, Nushell isn’t a silver bullet for everything, but for a large class of SRE tasks, it drastically improves the developer experience. At the very least, it will change how you think about piping data in the shell. And who knows – Nushell might just become your team’s secret weapon for fast, reliable internal automation. Happy hacking!