Saturday, November 29, 2025

Data Cleaning on the Command Line for Beginner Data Scientists



 

Introduction

 
When you think about data cleaning, you probably picture spinning up a Jupyter Notebook. But here is something that may surprise you: some of the most powerful data cleaning can happen right in your terminal, using tools that are already installed on your system.

In this article, you'll learn how to use basic command-line utilities to clean, transform, and explore data files. No installations are required; just your terminal and some CSV files.

Before we get started cleaning data on the command line, let's talk about why this matters:

  • Command-line tools are easy to use, fast, and efficient, especially for large files.
  • These tools are built into Linux/macOS and available on Windows (via WSL or Git Bash).
  • They're great for getting a first look at data before loading and analyzing it with Python.
  • It's easy to chain commands together in scripts and use those scripts for automation.

Now, let's start coding!

Note: You can find all the commands in this Bash script on GitHub. To get the most out of this article, I encourage you to open your terminal and code along.

 

Setting Up Sample Data

 
Let's create a messy CSV file to work with. This simulates real-world data issues you'll likely run into.

cat > messy_data.csv << 'EOF'
name,age,salary,department,email
John Lee,32,50000,Engineering,john@example.com
Jane Smith,28,55000,Marketing,jane@example.com
   Bob Davis    ,35,60000,Engineering,bob@example.com
Alice Williams,29,,Marketing,alice@example.com
Charlie Brown,45,70000,Sales,charlie@example.com
Dave Wilson,31,52000,Engineering,
Emma Davis,,58000,Marketing,emma@example.com
Frank Miller,38,65000,Sales,frank@example.com
John Lee,32,50000,Engineering,john@example.com
Grace Lee,27,51000,Engineering,grace@example.com
EOF

 

This dataset has several common issues: leading and trailing whitespace, missing values, and duplicate rows. It's great for learning!

 

1. Exploring Your Data with head, tail, and wc

 
Before cleaning your data, you need to understand what you're working with. Let's start with the basics.

# See the first 5 rows (including the header)
head -n 5 messy_data.csv

# See the last 3 rows
tail -n 3 messy_data.csv

# Count total rows (including the header)
wc -l messy_data.csv

 

Here's what is going on:

  • head -n 5 shows the first 5 lines, giving you a quick preview.
  • tail -n 3 shows the last 3 lines (useful for checking whether the data is complete).
  • wc -l counts lines; subtract 1 for the header to get your record count.

Output:

name,age,salary,department,email
John Lee,32,50000,Engineering,john@example.com
Jane Smith,28,55000,Marketing,jane@example.com
   Bob Davis    ,35,60000,Engineering,bob@example.com
Alice Williams,29,,Marketing,alice@example.com
Frank Miller,38,65000,Sales,frank@example.com
John Lee,32,50000,Engineering,john@example.com
Grace Lee,27,51000,Engineering,grace@example.com
11 messy_data.csv
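Since wc -l counts the header too, the subtraction is easy to script. A small sketch, assuming the messy_data.csv file created above:

```shell
# Reading from stdin makes wc print only the number (no filename),
# which is convenient for shell arithmetic
records=$(( $(wc -l < messy_data.csv) - 1 ))
echo "Records: $records"
```

With the sample file above, this prints `Records: 10`.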

 

2. Viewing Specific Columns with cut

 
You don't always need to see all the columns. Let's extract only the names and departments.

cut -d',' -f1,4 messy_data.csv

 

Breaking it down:

  • cut is a tool for extracting sections from each line.
  • -d',' sets the delimiter to a comma (for CSV files).
  • -f1,4 selects fields (columns) 1 and 4.
  • You can also use ranges: -f1-3 for columns 1 through 3.

Here is the output:

name,department
John Lee,Engineering
Jane Smith,Marketing
   Bob Davis    ,Engineering
Alice Williams,Marketing
Charlie Brown,Sales
Dave Wilson,Engineering
Emma Davis,Marketing
Frank Miller,Sales
John Lee,Engineering
Grace Lee,Engineering
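Ranges work the same way. For example, to keep the first three columns (name, age, salary):

```shell
# Columns 1 through 3; note that cut always emits fields in file
# order, so a list like -f4,1 would still print field 1 first
cut -d',' -f1-3 messy_data.csv
```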

 

3. Removing Duplicate Rows with sort and uniq

 
Notice that "John Lee" appears twice in our dataset. Let's fix that.

# Save the header first
head -n 1 messy_data.csv > cleaned_data.csv

# Remove duplicates from the data (excluding the header)
tail -n +2 messy_data.csv | sort | uniq >> cleaned_data.csv

 

Here's what each command does: head -n 1 grabs just the header row. tail -n +2 gets everything starting from line 2 (skipping the header). Then, sort sorts the lines. Note that uniq removes only adjacent duplicate lines, which is why the data must be sorted first. Finally, >> appends to the file (as opposed to >, which overwrites).
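Before deleting anything, it can be worth checking which rows are actually duplicated. `uniq -d` prints only repeated lines, and `sort -u` is a one-step equivalent of `sort | uniq`:

```shell
# Show only the data rows that appear more than once
tail -n +2 messy_data.csv | sort | uniq -d

# sort -u deduplicates in a single step
tail -n +2 messy_data.csv | sort -u
```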

 

4. Searching and Filtering with grep

 
Let's now do some searching and filtering. Want to find all engineers, or filter out rows with missing data? grep comes in handy for such tasks.

# Find all engineers
grep "Engineering" messy_data.csv

# Find rows with empty fields (two consecutive commas)
grep ",," messy_data.csv

# Exclude rows with missing data
grep -v ",," messy_data.csv > no_missing.csv

 

Here, grep "pattern" searches for lines containing that pattern. grep -v inverts the match (shows lines that do NOT match). This is a quick way to filter out incomplete records, provided the missing value shows up as a double comma (,,).
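One caveat: a missing value in the last column (like Dave Wilson's empty email) shows up as a trailing comma rather than ",,", so the pattern above misses it. An extended regex can cover both cases — a sketch, writing to a hypothetical no_missing_strict.csv:

```shell
# -E enables alternation: drop rows with an empty middle field (",,")
# or an empty last field (a comma at the end of the line)
grep -vE ',,|,$' messy_data.csv > no_missing_strict.csv
```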

 

5. Trimming Whitespace with sed

 
See how the record for "Bob Davis" has extra spaces? Let's clean that up.

sed 's/^[ \t]*//; s/[ \t]*$//' messy_data.csv > trimmed_data.csv

 

Now let's unpack the command: sed is a stream editor for text transformation. s/pattern/replacement/ is the substitution syntax. ^[ \t]* matches spaces and tabs at the start of a line, and [ \t]*$ matches them at the end. The semicolon separates the two operations (trim the line start, then trim the line end).
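Note that this only trims the start and end of each line. The spaces before the comma in "Bob Davis    ," sit in the middle of the line, so they survive. If you need every field trimmed, a per-field pass with awk is one way — a sketch, writing to a hypothetical field_trimmed.csv:

```shell
# Strip leading/trailing spaces and tabs from every field, not just line ends
awk -F',' 'BEGIN{OFS=","} {
  for (i = 1; i <= NF; i++) gsub(/^[ \t]+|[ \t]+$/, "", $i)
  print
}' messy_data.csv > field_trimmed.csv
```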

 

6. Replacing Values with sed

 
Sometimes you need to standardize values or fix typos. Let's replace all occurrences of "Engineering" with "Tech".

# Replace all "Engineering" with "Tech"
sed 's/Engineering/Tech/g' messy_data.csv

 

Next, let's fill empty email fields (denoted by a comma at the end of the line) with a default email value.

# Replace empty email fields with "no-email@example.com"
sed 's/,$/,no-email@example.com/' messy_data.csv

 

Run the above commands and observe the output. I've excluded the output here to avoid being repetitive.

Breaking it down:

  • The g flag means "global": replace all occurrences on each line.
  • ,$ matches a comma at the end of a line (indicating an empty last field).
  • You can chain multiple replacements with ; between them.
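For instance, both substitutions above can run in a single pass:

```shell
# Rename the department and fill empty trailing email fields in one pass
sed 's/Engineering/Tech/g; s/,$/,no-email@example.com/' messy_data.csv
```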

 

7. Counting and Summarizing with awk

 
awk is super useful for field-based operations. Let's do some basic analysis.

# Count records by department
tail -n +2 messy_data.csv | cut -d',' -f4 | sort | uniq -c

# Calculate the average age (excluding the header and empty values)
tail -n +2 messy_data.csv | awk -F',' '{if($2) {sum+=$2; count++}} END {print "Average age:", sum/count}'

 

In this awk command, -F',' sets the field separator to a comma, and $2 refers to the second field (age). The condition if($2) ensures only non-empty values are processed, while sum += $2 accumulates the total. Finally, the END block executes after all lines are read to calculate and print the average age.

Output:

      5 Engineering
      3 Marketing
      2 Sales
Average age: 33
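awk's associative arrays take this a step further and let you group while you aggregate. A sketch computing the average salary per department (field 3 is salary, field 4 is department; note the `for (d in sum)` loop prints groups in no particular order):

```shell
# Average salary per department, skipping rows with an empty salary
tail -n +2 messy_data.csv | awk -F',' '
  $3 != "" { sum[$4] += $3; n[$4]++ }
  END { for (d in sum) print d, sum[d] / n[d] }
'
```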

 

8. Combining Commands with Pipes

 
You get more powerful processing when you chain these command-line tools together.

# Get unique departments, sorted alphabetically
tail -n +2 messy_data.csv | cut -d',' -f4 | sort | uniq

# Find engineers with salary > 55000
tail -n +2 messy_data.csv | grep "Engineering" | awk -F',' '$3 > 55000' | cut -d',' -f1,3

# Count employees per department, highest count first
tail -n +2 messy_data.csv | cut -d',' -f4 | sort | uniq -c | sort -rn

 

Here, each | passes the output of one command as input to the next. This lets you build complex data transformations step by step. The last pipeline sorts by count in reverse numerical order (-rn).

This outputs:

Engineering
Marketing
Sales
   Bob Davis    ,60000
      5 Engineering
      3 Marketing
      2 Sales

 

9. Converting Data Formats

 
Sometimes you need to work with different delimiters. Here, we use a tab as the separator instead of a comma.

# Convert CSV to TSV (GNU sed understands \t; on macOS/BSD sed, type a literal tab instead)
sed 's/,/\t/g' messy_data.csv > data.tsv

# Add a new column with a fixed value
awk -F',' 'BEGIN{OFS=","} {print $0, "2024"}' messy_data.csv > data_with_year.csv

 

In this awk command, BEGIN{OFS=","} sets the output field separator to a comma. $0 represents the entire input line, and print $0, "2024" appends "2024" as a new column to each line of output.
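The appended value doesn't have to be a constant — awk can compute it from existing fields. A sketch that adds a salary-in-thousands column (the `salary_k` column name is chosen here for illustration):

```shell
# Add a computed column: salary divided by 1000 (blank when salary is missing)
awk -F',' 'BEGIN{OFS=","}
  NR == 1 { print $0, "salary_k"; next }   # header row gets a column name
  { print $0, ($3 != "" ? $3 / 1000 : "") }
' messy_data.csv
```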

 

10. A Complete Cleaning Pipeline

 
Let's put it all together into one handy command that cleans our messy data:

# Save the header
head -n 1 messy_data.csv > final_clean.csv

# Clean the data: trim whitespace, exclude missing values, remove duplicates
tail -n +2 messy_data.csv | 
  sed 's/^[ \t]*//; s/[ \t]*$//' | 
  grep -v ",," | 
  sort | 
  uniq >> final_clean.csv

echo "Cleaning complete! Check final_clean.csv"

 

This pipeline first saves the header to preserve the column names, then skips it while processing the data rows. It trims leading and trailing whitespace from each line, removes any rows containing empty fields (specifically double commas), sorts the data, and eliminates duplicate entries. Finally, it appends the cleaned data to the output file.
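A quick sanity check after a pipeline like this is always worth it — compare line counts and confirm no duplicates survived:

```shell
# Row counts before and after cleaning
echo "Before: $(wc -l < messy_data.csv) lines"
echo "After:  $(wc -l < final_clean.csv) lines"

# Print any duplicate data rows that remain; no output means success
tail -n +2 final_clean.csv | sort | uniq -d
```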

 

Conclusion

 
Command-line data cleaning is a powerful yet underrated skill for data scientists. These tools are fast and reliable. While you'll still use Python for complex analysis, mastering these basics will make you more efficient and give you options when Python isn't ideal.

The best part is that these skills transfer to data engineering, DevOps, and system administration roles. Learning to manipulate data on the command line makes you a more versatile developer.

Start practicing with your own datasets, and you'll be surprised how often you reach for these tools instead of spinning up Python notebooks. Happy data cleaning!
 
 

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.


