137

I have a pandas dataframe with the following column names:

Result1, Test1, Result2, Test2, Result3, Test3, etc...

I want to drop all the columns whose name contains the word "Test". The number of such columns is not static but depends on a previous function.

How can I do that?
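For reference, a minimal frame with that layout (the values below are placeholders; only the column names matter):

import pandas as pd

# Arbitrary values; the point is the Result*/Test* column layout.
df = pd.DataFrame([[1, 2, 3, 4, 5, 6]],
                  columns=['Result1', 'Test1', 'Result2', 'Test2', 'Result3', 'Test3'])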

asked by Alexis Eggermont, edited by cs95

11 Answers

205

Here is one way to do this:

df = df[df.columns.drop(list(df.filter(regex='Test')))]
answered by Bindiya12, edited by cs95
  • Or directly in place: `df.drop(list(df.filter(regex = 'Test')), axis = 1, inplace = True)` – Axel Nov 15 '17 at 13:46
  • This is a much more elegant solution than the accepted answer. I would break it down a bit more to show why, mainly extracting `list(df.filter(regex='Test'))` to better show what the line is doing. I would also opt for `df.filter(regex='Test').columns` over list conversion – Charles Mar 13 '18 at 23:12
  • This one is way more elegant than the accepted answer. – deepelement Oct 12 '18 at 00:22
  • I really wonder what the comments calling this answer "elegant" mean. I myself find it quite obfuscated, when Python code should first be readable. It is also twice as slow as the first answer. And it uses the `regex` keyword when the `like` keyword seems more adequate. – Jacquot Mar 08 '19 at 09:15
  • This is not actually as good an answer as people claim. The problem with `filter` is that it _returns a copy of ALL the data as columns_ that you want to drop. It is wasteful if you're only passing this result to `drop` (which again returns a copy)... a better solution would be `str.startswith` (I've added an [answer](https://stackoverflow.com/a/54410702/4909087) with that here). – cs95 May 31 '19 at 03:58
  • My most concise version is `df.drop(columns=df.filter(like='SomeString').columns)`, which returns a copy of the DataFrame without the columns that contain `"SomeString"`. – Migwell Mar 14 '21 at 02:26
  • Thank you very much, even after 7 years it's still useful!!! – Sarindra Thérèse May 07 '21 at 13:18
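Picking up on the comments above, the one-liner is easier to audit when its steps are named. A minimal sketch of the same operation (the intermediate variable name is my own):

import pandas as pd

df = pd.DataFrame({'Result1': [1], 'Test1': [2], 'Result2': [3], 'Test2': [4]})

# Step 1: collect the matching column names.
cols_to_drop = df.filter(regex='Test').columns  # Index(['Test1', 'Test2'], dtype='object')

# Step 2: drop them; same result as the one-liner.
df = df.drop(columns=cols_to_drop)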
98
import pandas as pd
import numpy as np

array = np.random.random((2, 4))
df = pd.DataFrame(array, columns=('Test1', 'toto', 'test2', 'riri'))
print(df)

      Test1      toto     test2      riri
0  0.923249  0.572528  0.845464  0.144891
1  0.020438  0.332540  0.144455  0.741412

cols = [c for c in df.columns if c.lower()[:4] != 'test']
df = df[cols]
print(df)
       toto      riri
0  0.572528  0.144891
1  0.332540  0.741412
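The slice-and-compare prefix check `c.lower()[:4] != 'test'` can also be spelled with `startswith`, which may read better; an equivalent sketch:

cols = [c for c in df.columns if not c.lower().startswith('test')]
df = df[cols]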
answered by Nic
66

Cheaper, Faster, and Idiomatic: str.startswith and str.contains

In recent versions of pandas, you can use string methods on the index and columns. Here, str.startswith seems like a good fit.

To remove all columns starting with a given substring:

df.columns.str.startswith('Test')
# array([ True, False, False, False])

df.loc[:,~df.columns.str.startswith('Test')]

  toto test2 riri
0    x     x    x
1    x     x    x

For case-insensitive matching, you can use regex-based matching with str.contains with a start-of-line (`^`) anchor:

df.columns.str.contains('^test', case=False)
# array([ True, False,  True, False])

df.loc[:,~df.columns.str.contains('^test', case=False)] 

  toto riri
0    x    x
1    x    x

If mixed types are a possibility, specify `na=False` as well.
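To illustrate why `na=False` matters: with a non-string column label, `str.contains` yields NaN for that entry, and a mask containing NaN cannot be used for boolean indexing. A small sketch (the integer column label is my own example):

import pandas as pd

df = pd.DataFrame([[1, 2, 3]], columns=['Test1', 'toto', 2])  # note the non-string label

# Without na=False the mask would contain NaN for the label 2,
# and indexing with it would raise; na=False treats it as a non-match.
mask = df.columns.str.contains('^test', case=False, na=False)
df.loc[:, ~mask]  # keeps 'toto' and 2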

answered by cs95
20

You can keep just the columns you DO want using `filter`:

import pandas as pd

data2 = [{'test2': 1, 'result1': 2}, {'test': 5, 'result34': 10, 'c': 20}]
df = pd.DataFrame(data2)

df

      c  result1  result34  test  test2
0   NaN      2.0       NaN   NaN    1.0
1  20.0      NaN      10.0   5.0    NaN

Now filter:

df.filter(like='result', axis=1)

You get:

   result1  result34
0      2.0       NaN
1      NaN      10.0
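If the goal is to drop rather than keep, the same `filter` result can feed `drop`; a one-line sketch:

df.drop(columns=df.filter(like='test', axis=1).columns)  # drops 'test' and 'test2', keeps the rest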
answered by SAH
19

This can be done neatly in one line with:

df = df.drop(df.filter(regex='Test').columns, axis=1)
answered by Warren O'Neill
11

Use the DataFrame.select method (note that `select` was deprecated in pandas 0.21 and removed in 1.0):

In [37]: import re; import numpy as np; import pandas as pd

In [38]: df = pd.DataFrame({'Test1': np.random.randn(10), 'Test2': np.random.randn(10), 'awesome': np.random.randn(10)})

In [39]: df.select(lambda x: not re.search(r'Test\d+', x), axis=1)
Out[39]:
   awesome
0    1.215
1    1.247
2    0.142
3    0.169
4    0.137
5   -0.971
6    0.736
7    0.214
8    0.111
9   -0.214
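On pandas versions where `select` is gone, the same filtering can be done with boolean indexing; a sketch of my translation, not part of the original answer:

# Keep only the columns whose names do not match 'Test\d+'.
df.loc[:, [not re.search(r'Test\d+', c) for c in df.columns]]  # same output as Out[39]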
answered by Phillip Cloud
5

This method does everything in place. Many of the other answers create copies and are not as efficient:

df.drop(df.columns[df.columns.str.contains('Test')], axis=1, inplace=True)

answered by winderland
3

Don't drop. Catch the opposite of what you want.

df = df.filter(regex='^((?!badword).)*$')
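The pattern `^((?!badword).)*$` matches a name only if `badword` starts at no position in it: the negative lookahead is re-tested before each character is consumed. A quick sketch against the question's columns:

import pandas as pd

df = pd.DataFrame([[1, 2]], columns=['Result1', 'Test1'])
df.filter(regex='^((?!Test).)*$')  # keeps Result1, drops Test1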
answered by Roy Assis, edited by Boken
2

The shortest way to do it is:

resdf = df.filter(like='Test', axis=1)

Note that this keeps the columns containing "Test"; to drop them instead, invert it, e.g. `df.drop(columns=resdf.columns)`.
answered by ZacNt
2

Question states 'I want to drop all the columns whose name contains the word "Test".'

test_columns = [col for col in df if 'Test' in col]
df.drop(columns=test_columns, inplace=True)
answered by Marvasti
0

Solution for dropping a list of column-name regex patterns. I prefer this approach because I'm frequently editing the drop list. It builds a negative-lookahead filter regex from the drop list, so `filter` keeps everything that is not on it.

import re

drop_column_names = ['A', 'B.+', 'C.*']
drop_columns_regex = '^(?!(?:' + '|'.join(drop_column_names) + ')$)'
print('Dropping columns:', ', '.join(c for c in df.columns if not re.search(drop_columns_regex, c)))
df = df.filter(regex=drop_columns_regex, axis=1)
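A quick check of what that pattern keeps, on a made-up frame (the column names are my own example):

import re
import pandas as pd

df = pd.DataFrame([[0, 0, 0, 0, 0]], columns=['A', 'B1', 'B', 'Cy', 'D'])
drop_columns_regex = '^(?!(?:A|B.+|C.*)$)'

# Only names that fully match an entry in the drop list are removed:
df.filter(regex=drop_columns_regex, axis=1)  # keeps B and D; drops A, B1, Cy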
answered by BSalita