297

I want to create a near 100% load on a Linux machine. It's a quad-core system and I want all cores going full speed. Ideally, the CPU load would last a designated amount of time and then stop. I'm hoping there's some trick in bash. I'm thinking some sort of infinite loop.

dkb
  • 3,238
  • 3
  • 26
  • 41
User1
  • 35,366
  • 60
  • 170
  • 251

23 Answers

379

I use stress for this kind of thing. You can tell it how many cores to max out; it allows for stressing memory and disk as well.

Example to stress 2 cores for 60 seconds

stress --cpu 2 --timeout 60
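
For the asker's quad-core case, a hedged sketch that maxes out every logical core for a fixed time, assuming `nproc` is available to report the logical CPU count (substitute the number by hand otherwise):

# Max out all logical cores for 60 seconds, then stop automatically
stress --cpu "$(nproc)" --timeout 60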

artbristol
  • 30,694
  • 5
  • 61
  • 93
David
  • 4,116
  • 1
  • 14
  • 5
  • 7
    On Fedora, `sudo yum install stress` – Christopher Markieta Nov 13 '14 at 22:16
  • 3
    You need the `EPEL` repo for CentOS: `wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm` – Satish Dec 16 '14 at 17:50
  • 5
    [`brew`](http://brew.sh/) `install stress` on OS X. Also for some reason I had to specify 8 cores on a quad-core MBPr – fregante Mar 08 '15 at 14:28
  • 1
    @bfred.it Your cores may utilize hyperthreading, effectively doubling your amount of cores (4 physical and 4 virtual cores). You'll also want to stress the virtual ones for a full load test. – Mast May 21 '15 at 11:53
  • 6
    `sudo apt-get install stress` on debian based systems, for completeness. Used this to test a cooling mod on the [Intel i7 NUC Kit](http://www.intel.com/content/www/us/en/nuc/nuc-kit-nuc5i7ryh.html). – onmylemon Nov 26 '15 at 13:30
  • CentOS 7: #wget dag.wieers.com/redhat/el7/en/x86_64/dag/RPMS/stress-1.0.2-1.el7.rf.x86_64.rpm – Oleg Neumyvakin Dec 08 '15 at 06:41
  • This is great for cold winters with your laptop. – fregante Jan 12 '18 at 08:46
307

You can also do

dd if=/dev/zero of=/dev/null

To put load on more cores, run more of them and fork the whole thing to the background:

fulload() { dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null & }; fulload; read; killall dd

Repeat the command inside the curly brackets as many times as the number of threads you want to produce (here, 4 threads). Simply hitting Enter will stop it (just make sure no other dd is running under this user, or you will kill it too).
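
If you want the load to stop after a designated time instead of on a keypress, a sketch using coreutils timeout (assuming `timeout` is available; adjust the worker count and the 60 seconds to taste):

# Start 4 dd workers that each exit on their own after 60 seconds
for i in 1 2 3 4; do timeout 60 dd if=/dev/zero of=/dev/null & done
wait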

Community
  • 1
  • 1
dimba
  • 24,103
  • 28
  • 127
  • 188
  • 38
    dd deals more with I/O than with CPU usage – Fred May 28 '10 at 18:06
  • 2
    This actually worked the best for my situation. It also worked in Cygwin. For some reason, the other solutions wouldn't quite spike the CPU. Adding a count and making four processes in parallel worked perfectly. It spiked the CPU at 100% in top and then back down to zero without any help. Just four lines of code and a "wait". – User1 May 28 '10 at 22:30
  • 71
    Reading from `/dev/zero` and writing to `/dev/null` is not a very good load generator - you have to run a lot of them to generate significant load. Better to do something like `dd if=/dev/urandom | bzip2 -9 >> /dev/null`. `/dev/urandom` requires significantly more effort to generate output, and `bzip2` will expend a lot of effort trying to compress it, so the overall CPU usage is a lot higher than "fill a block with zeros, and then throw it away". – twalberg Sep 10 '13 at 15:46
  • 4
    Use `jobs -p | xargs kill` to only kill the processes you have created. – Marian Feb 10 '15 at 15:33
  • 6
    @twalberg, you should make your comment into an answer. – Aaron McDaid Jul 28 '15 at 09:55
  • Works great. In CentOS 7, i had to substitute 'killall dd' with 'pkill -e ^dd' – andy318 Mar 22 '17 at 23:05
  • 2
    instead of "bzip2" you can try "pbzip2" - it is multithreaded and will load all cores. – Dan Pritts Dec 01 '17 at 21:08
  • @twalberg, compressing from /dev/urandom on my system generates more load for dd than it does for bzip2. So, still probably not a very good load generator. A solution can be to chain the bzip2 commands so that the data random is reused and dd makes up a relatively small fraction of total cpu: dd if=/dev/urandom | bzip2 -9 | bzip2 -9 | bzip2 -9 | bzip2 -9 >> /dev/null – RoG Apr 08 '18 at 10:15
  • 1
    @RoG That's not inconsistent with the OP's requirement - they only asked for a way to make the CPUs busy, with no specification of whether it's kernel-mode or user-mode activity. `dd` from `/dev/urandom` will create high kernel-mode activity, and attempting to compress that data contributes a user-mode factor. Busy CPUs are busy.... – twalberg Apr 08 '18 at 11:49
  • Thanks @dimba and @twalberg ... the solution for me was `dd if=/dev/urandom of=/dev/null` which ran my cpu at 100% – Vince K Oct 13 '18 at 18:04
  • is this possible for windows? – Parvathy Sarat Nov 09 '18 at 03:56
  • If all CPUs are at 100% load in `top`, how is this solution "not a very good load generator"? Is there some other metric that should be measured? – User1 Apr 10 '19 at 21:16
  • For people with many CPUs to stress, I found that `cat /dev/zero | pbzip2` worked much better than `/dev/urandom` -- urandom was blocking on the apparently single-threaded generation of random data, while /dev/zero can produce zeroes much (much) faster. Even if it trivially compresses, having to do that work (along with distributing 900kB blocks of zeroes) produces nearly arbitrarily large CPU load. – James K Oct 14 '19 at 16:20
139

I think this one is simpler. Open a terminal, type the following, and press Enter.

yes > /dev/null &

To fully utilize modern CPUs, one instance is not enough; you may need to repeat the command once per core to exhaust all the CPU power.

To end all of this, simply run

killall yes

I originally found the idea here; although it was intended for Mac users, it should work for *nix as well.
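
A hedged sketch of the repeat-and-kill recipe, assuming `nproc` is available (otherwise hard-code your core count):

# One yes per logical core, stopped after 30 seconds
for i in $(seq "$(nproc)"); do yes > /dev/null & done
sleep 30
killall yes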

oz123
  • 23,317
  • 25
  • 106
  • 169
user1147015
  • 1,419
  • 1
  • 9
  • 2
  • 8
    +1 Works like a charm, thank you! **Worth adding**: this command will max out one hyperthread per cpu core. So a dual core cpu (each core having 2 threads) will get a total load of 25% per `yes` command (assuming the system was otherwise idle). – GitaarLAB Jan 13 '14 at 05:13
  • Just to add to this, each invocation of this command adds 25 percent load on the CPU (Android), up to 4 invocations; the rest have no effect (even in terms of clock rate). – user3188978 Mar 16 '17 at 11:00
35

Although I'm late to the party, this post is among the top results in the Google search "generate load in linux".

The answer marked as the solution can be used to generate system load; I prefer to use sha1sum /dev/zero to impose a load on a CPU core.

The idea is to calculate a hash sum from an infinite data stream (e.g. /dev/zero, /dev/urandom, ...); this process will try to max out a CPU core until it is aborted. To generate load on more cores, multiple commands can be piped together.

e.g. to generate a 2-core load: sha1sum /dev/zero | sha1sum /dev/zero
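
For the asker's quad-core machine, a sketch of the same piping idea extended to four stages (each sha1sum reads /dev/zero itself and ignores the pipe, so every stage pins one core; Ctrl-C stops the whole pipeline):

sha1sum /dev/zero | sha1sum /dev/zero | sha1sum /dev/zero | sha1sum /dev/zero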

Mitms
  • 481
  • 4
  • 7
  • 1
    I'm seeing this to be better than dd for CPU load. I get a max CPU load of 44% with dd (6 instances) and 86%+ with sha1sum. Thx~! – AAI Apr 03 '18 at 20:37
30

One core (doesn't invoke external process):

while true; do true; done

Two cores:

while true; do /bin/true; done

The latter only makes both of mine go to ~50% though...

This one will make both go to 100%:

while true; do echo; done
  • With echo, we lose access to the shell. How do I put that 3rd command in the background? – AAI Apr 03 '18 at 20:26
  • 2
    Why does echo make all cpu cores to 100%? – Haoyuan Ge Apr 18 '18 at 03:16
    @HaoyuanGe All the CPUs are at 100% only when echoing "nothing". Replace the do echo; with do echo "some very very long string"; to see <100% on the CPUs. So, I believe, echoing nothing involves far fewer jumps and hence more loop iterations (because of while true;), so the cores are ~100% busy – talekeDskobeDa Feb 04 '19 at 08:52
  • If you want to keep a server responsive whilst running such a test, do it in a separate shell (another tmux/screen window, or ssh session), and renice that shell first, e.g. in bash: `renice 19 -p $$`. It'll still max the CPUs, but won't impinge on other processes. – Walf Feb 07 '19 at 05:27
28

To load 3 cores for 5 seconds:

seq 3 | xargs -P0 -n1 timeout 5 yes > /dev/null

This results in high kernel (sys) load from the many write() system calls.

If you prefer mostly userland cpu load:

seq 3 | xargs -P0 -n1 timeout 5 md5sum /dev/zero

If you just want the load to continue until you press Ctrl-C:

seq 3 | xargs -P0 -n1 md5sum /dev/zero
James Scriven
  • 6,719
  • 1
  • 28
  • 35
18

Here is a program that you can download here.

Install it easily on your Linux system:

./configure
make
make install

and launch it with a simple command line:

stress -c 40

to stress all your CPUs (however many you have) with 40 threads, each running a complex sqrt computation on randomly generated numbers.

You can even define a timeout for the program:

stress -c 40 --timeout 10s

Unlike the proposed solution with the dd command, which deals essentially with I/O and therefore doesn't really load your system because it is mostly moving data around, the stress program really loads the system because it is doing computation.

Fopa Léon Constantin
  • 10,665
  • 7
  • 38
  • 73
  • 4
    There is already an answer above for the `stress` command. As that answer says, you can just install it via yum/apt/etc. – Asfand Qazi Jul 22 '15 at 15:51
  • 1
    Website not in good condition (503 Forbidden) but is available on the repos :) – m3nda Aug 01 '15 at 06:48
12

An infinite loop is the idea I also had. A freaky-looking one is:

while :; do :; done

(: is the same as true; it does nothing and exits with status zero)

You can call that in a subshell and run it in the background. Doing that $num_cores times should be enough. After sleeping for the desired time you can kill them all; you get the PIDs with jobs -p (hint: xargs). A sketch follows.
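
A minimal sketch of that recipe (NUM_CORES and DURATION are placeholders to set yourself):

NUM_CORES=4      # how many busy loops to start
DURATION=60      # seconds to keep them running
for i in $(seq "$NUM_CORES"); do
    ( while :; do :; done ) &   # one busy subshell per core
done
sleep "$DURATION"
jobs -p | xargs kill            # kill only the loops started above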

techraf
  • 53,268
  • 22
  • 149
  • 166
Marian
  • 4,839
  • 2
  • 16
  • 20
11
:(){ :|:& };:

This fork bomb will wreak havoc on the CPU and will likely crash your computer.

Ruben
  • 8,812
  • 5
  • 31
  • 44
  • 14
    It'll help if I make it easier to read: fork_bomb() { fork_bomb | fork_bomb & }; fork_bomb – Jeff Goldstein May 27 '10 at 23:16
  • 17
    That one fails on the "last a designated amount of time and then stop" criterion ;) – Marian May 27 '10 at 23:26
  • 14
    looks like a bunch of smiley faces. – Ponkadoodle May 28 '10 at 01:07
  • 2
    This fork bomb crashed my computer, I had to do a hard power cycle. – Elijah Lynn Feb 13 '14 at 14:11
  • 2
    From: http://www.cyberciti.biz/faq/understanding-bash-fork-bomb/ WARNING! These examples may crash your computer if executed. "Once a successful fork bomb has been activated in a system it may not be possible to resume normal operation without rebooting the system as the only solution to a fork bomb is to destroy all instances of it." – Elijah Lynn Feb 13 '14 at 14:17
10

I would split the thing into 2 scripts:

infinite_loop.bash :

#!/bin/bash
while [ 1 ] ; do
    # Force some computation even if it is useless to actually work the CPU
    echo $((13**99)) 1>/dev/null 2>&1
done

cpu_spike.bash :

#!/bin/bash
# Either use environment variables for NUM_CPU and DURATION, or define them here
for i in `seq ${NUM_CPU}`; do
    # Put an infinite loop on each CPU
    infinite_loop.bash &
done

# Wait DURATION seconds then stop the loops and quit
sleep ${DURATION}
killall infinite_loop.bash
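
A usage sketch, assuming both scripts are executable and infinite_loop.bash can be found on the PATH (cpu_spike.bash calls it by name):

# Four loops for 60 seconds
NUM_CPU=4 DURATION=60 ./cpu_spike.bash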
Fred
  • 4,733
  • 1
  • 27
  • 47
7
cat /dev/urandom > /dev/null
Evgeny
  • 5,247
  • 3
  • 49
  • 58
7

To increase load or consume 100% (or X%) of the CPU:

sha1sum /dev/zero &

On some systems this will increase the load in steps of X%; in that case, you have to run the same command multiple times.

You can then see the CPU usage by running

top

To release the load:

killall sha1sum
44kksharma
  • 2,235
  • 22
  • 27
4

I've used bc (the arbitrary-precision calculator), asking it for pi with a huge number of decimals.

$ for ((i=0;i<$NUMCPU;i++));do
    echo 'scale=100000;pi=4*a(1);0' | bc -l &
    done ;\
    sleep 4; \
    killall bc

with NUMCPU (under Linux):

$ NUMCPU=$(grep $'^processor\t*:' /proc/cpuinfo |wc -l)

This method is strong but seems system friendly, as I've never crashed a system using it.

F. Hauri
  • 51,421
  • 13
  • 88
  • 109
4
#!/bin/bash
duration=120    # seconds
instances=4     # cpus
endtime=$(($(date +%s) + $duration))
for ((i=0; i<instances; i++))
do
    while (($(date +%s) < $endtime)); do :; done &
done
Dennis Williamson
  • 303,596
  • 86
  • 357
  • 418
3
#!/bin/bash
while [ 1 ]
do
        #Your code goes here
done
Secko
  • 7,038
  • 4
  • 27
  • 35
2

I searched the Internet for something like this and found this very handy CPU hammer script.

#!/bin/sh

# unixfoo.blogspot.com

if [ $1 ]; then
    NUM_PROC=$1
else
    NUM_PROC=10
fi

for i in `seq 0 $((NUM_PROC-1))`; do
    awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}' &
done
Ant4res
  • 1,187
  • 1
  • 15
  • 34
2

Using examples mentioned here, along with help from IRC, I developed my own CPU stress testing script. It uses a subshell per thread and the endless loop technique. You can also specify the number of threads and the amount of time interactively.

#!/bin/bash
# Simple CPU stress test script

# Read the user's input
echo -n "Number of CPU threads to test: "
read cpu_threads
echo -n "Duration of the test (in seconds): "
read cpu_time

# Run an endless loop on each thread to generate 100% CPU
echo -e "\E[32mStressing ${cpu_threads} threads for ${cpu_time} seconds...\E[37m"
for i in $(seq ${cpu_threads}); do
    let thread=${i}-1
    (taskset -cp ${thread} $BASHPID; while true; do true; done) &
done

# Once the time runs out, kill all of the loops
sleep ${cpu_time}
echo -e "\E[32mStressing complete.\E[37m"
kill 0
2

Utilizing ideas from here, I created code which exits automatically after a set duration, so you don't have to kill the processes:

#!/bin/bash
echo "Usage : ./killproc_ds.sh 6 60  (6 threads for 60 secs)"

# Define variables
NUM_PROCS=${1:-6} #How much scaling you want to do
duration=${2:-20}    # seconds

function infinite_loop {
endtime=$(($(date +%s) + $duration))
while (($(date +%s) < $endtime)); do
    #echo $(date +%s)
    echo $((13**99)) 1>/dev/null 2>&1
    dd if=/dev/urandom count=10000 status=none | bzip2 -9 > /dev/null 2>&1
done
echo "Done Stressing the system - for thread $1"
}


echo Running for duration $duration secs, spawning $NUM_PROCS threads in background
for i in `seq ${NUM_PROCS}` ;
do
# Put an infinite loop
    infinite_loop $i  &
done
BoppreH
  • 5,650
  • 1
  • 28
  • 59
Dhiraj
  • 21
  • 1
1

This does the trick for me:

bash -c 'for (( I=100000000000000000000 ; I>=0 ; I++ )) ; do echo $(( I+I*I )) & echo $(( I*I-I )) & echo $(( I-I*I*I )) & echo $(( I+I*I*I )) ; done' &>/dev/null

and it uses nothing except bash.

ZyX
  • 49,123
  • 7
  • 101
  • 128
1

To enhance dimba's answer and provide something more pluggable (because I needed something similar), I have written the following using the dd load-up concept :D

It will check the current number of cores and create that many dd threads. Start and end the core load with Enter.

#!/bin/bash

load_dd() {
    dd if=/dev/zero of=/dev/null
}

fulload() {
    unset LOAD_ME_UP_SCOTTY
    export cores="$(grep proc /proc/cpuinfo -c)"
    for i in $( seq 1 $( expr $cores - 1 ) ); do
        export LOAD_ME_UP_SCOTTY="${LOAD_ME_UP_SCOTTY}$(echo 'load_dd | ')"
    done
    export LOAD_ME_UP_SCOTTY="${LOAD_ME_UP_SCOTTY}$(echo 'load_dd &')"
    eval ${LOAD_ME_UP_SCOTTY}
}

echo "press return to begin and stop full load of cores"
read
fulload
read
killall -9 dd
knope
  • 34
  • 7
1

Dimba's dd if=/dev/zero of=/dev/null is definitely correct, but it is also worth mentioning how to verify that the CPU is maxed out at 100% usage. You can do this with:

ps -axro pcpu | awk '{sum+=$1} END {print sum}'

This asks ps for a 1-minute average of the CPU usage of each process, then sums them with awk. While it's a 1-minute average, ps is smart enough to know if a process has only been around a few seconds and adjusts the time window accordingly. Thus you can use this command to see the result immediately.
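
If you also want that sum expressed relative to the number of cores, a sketch assuming nproc is available:

# e.g. prints roughly "400.0% across 4 cores (100.0% per core)" when a quad-core box is maxed out
ps -axro pcpu | awk -v n="$(nproc)" '{sum+=$1} END {printf "%.1f%% across %d cores (%.1f%% per core)\n", sum, n, sum/n}'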

jeremysprofile
  • 6,648
  • 4
  • 25
  • 41
0

I combined some of the answers and added a way to scale the stress to all available cpus:

#!/bin/bash

function infinite_loop { 
    while [ 1 ] ; do
        # Force some computation even if it is useless to actually work the CPU
        echo $((13**99)) 1>/dev/null 2>&1
    done
}

# Either use environment variables for DURATION, or define them here
NUM_CPU=$(grep -c ^processor /proc/cpuinfo 2>/dev/null || sysctl -n hw.ncpu)
PIDS=()
for i in `seq ${NUM_CPU}` ;
do
# Put an infinite loop on each CPU
    infinite_loop &
    PIDS+=("$!")
done

# Wait DURATION seconds then stop the loops and quit
sleep ${DURATION}

# Parent kills its children 
for pid in "${PIDS[@]}"
do
    kill $pid
done
joecks
  • 4,091
  • 33
  • 44
-1

Just paste this bad boy into an SSH session or the console of any server running Linux. You can kill the processes manually, but I just shut down the server when I'm done; it's quicker.

Edit: I have updated this script to now have a timer feature so that there is no need to kill the processes.

read -p "Please enter the number of minutes for test >" MINTEST && [[ $MINTEST == ?(-)+([0-9]) ]]; NCPU="$(grep -c ^processor /proc/cpuinfo)";  ((endtime=$(date +%s) + ($MINTEST*60))); NCPU=$((NCPU-1)); for ((i=1; i<=$NCPU; i++)); do while (($(date +%s) < $endtime)); do : ; done & done