I have a DataFrame with 4 columns of which 2 contain string values. I was wondering if there was a way to select rows based on a partial string match against a particular column?

In other words, a function or lambda function that would do something like

re.search(pattern, cell_in_question) 

returning a boolean. I am familiar with the syntax of df[df['A'] == "hello world"] but can't seem to find a way to do the same with a partial string match, say 'hello'.

Would someone be able to point me in the right direction?

cs95
euforia

14 Answers


Based on GitHub issue #620, it looks like you'll soon be able to do the following:

df[df['A'].str.contains("hello")]

Update: vectorized string methods (i.e., Series.str) are available in pandas 0.8.1 and up.
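For anyone on a recent pandas, a minimal, self-contained sketch of this (the column name and sample data are invented for illustration):

```python
import pandas as pd

# Hypothetical sample data for illustration
df = pd.DataFrame({'A': ['hello world', 'goodbye', 'say hello']})

# Boolean mask: True where column 'A' contains the substring "hello"
mask = df['A'].str.contains('hello')
print(df[mask])  # rows 0 and 2 are returned
```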

hlin117
Garrett
  • How do we go about "Hello" and "Britain" if I want to find them with an "OR" condition? – LonelySoul Jun 27 '13 at 16:41
  • Since str.* methods treat the input pattern as a regular expression, you can use `df[df['A'].str.contains("Hello|Britain")]`. – Garrett Jun 27 '13 at 19:20
  • Is it possible to convert `.str.contains` to use the [`.query()` API](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html#pandas.DataFrame.query)? – zyxue Mar 01 '17 at 17:25
  • @zyxue [Select rows by partial string query with pandas](https://stackoverflow.com/q/44933071/395857) – Franck Dernoncourt Jul 05 '17 at 18:01
  • `df[df['value'].astype(str).str.contains('1234.+')]` for filtering out non-string-type columns. – François Leblanc Feb 13 '18 at 20:22
  • What about an "AND" condition, i.e. how do you look for multiple strings at the same time, so not just `"hello"` but `["hello", "hey", "hi"]`? – NeStack Nov 30 '18 at 17:32
  • To "AND" substrings when the order of the substrings is important/known, you could use `df[df.A.str.contains("STR1.*STR2")]`. If the order is unimportant/unknown, `df[df.A.str.contains("STR1") & df.A.str.contains("STR2")]`. – Garrett Nov 30 '18 at 19:35
  • @NeStack I've added more information about multiple substring searches [here](https://stackoverflow.com/a/55335207/4909087). – cs95 Mar 25 '19 at 20:20
  • If there are nulls in the column, one must also include the flag to ignore these (if desired): `df[df['A'].str.contains("hello", na=False)]` – defraggled Sep 29 '20 at 02:48
  • How to apply if/else, e.g. if it contains, do this; else do that? – G.ONE Jan 20 '21 at 12:46

I tried the proposed solution above:

df[df["A"].str.contains("Hello|Britain")]

and got an error:

ValueError: cannot mask with array containing NA / NaN values

You can transform the NA values into False, like this:

df[df["A"].str.contains("Hello|Britain", na=False)]
AMC
sharon

How do I select by partial string from a pandas DataFrame?

This post is meant for readers who want to

  • search for a substring in a string column (the simplest case)
  • search for multiple substrings (similar to isin)
  • match a whole word from text (e.g., "blue" should match "the sky is blue" but not "bluejay")
  • match multiple whole words
  • understand the reason behind "ValueError: cannot index with vector containing NA / NaN values"

...and would like to know more about what methods should be preferred over others.

(P.S.: I've seen a lot of questions on similar topics, I thought it would be good to leave this here.)

Friendly disclaimer: this post is long.


Basic Substring Search

# setup
df1 = pd.DataFrame({'col': ['foo', 'foobar', 'bar', 'baz']})
df1

      col
0     foo
1  foobar
2     bar
3     baz

str.contains can be used to perform either plain substring searches or regex-based searches. The search defaults to regex-based unless you explicitly disable it.

Here is an example of regex-based search,

# find rows in `df1` which contain "foo" followed by something
df1[df1['col'].str.contains(r'foo(?!$)')]

      col
1  foobar

Sometimes regex search is not required, so specify regex=False to disable it.

# select all rows containing "foo"
df1[df1['col'].str.contains('foo', regex=False)]
# same as df1[df1['col'].str.contains('foo')] but faster.
   
      col
0     foo
1  foobar

Performance wise, regex search is slower than substring search:

df2 = pd.concat([df1] * 1000, ignore_index=True)

%timeit df2[df2['col'].str.contains('foo')]
%timeit df2[df2['col'].str.contains('foo', regex=False)]

6.31 ms ± 126 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.8 ms ± 241 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Avoid using regex-based search if you don't need it.

Addressing ValueErrors
Sometimes, performing a substring search and filtering on the result will result in

ValueError: cannot index with vector containing NA / NaN values

This is usually because of mixed data or NaNs in your object column,

s = pd.Series(['foo', 'foobar', np.nan, 'bar', 'baz', 123])
s.str.contains('foo|bar')

0     True
1     True
2      NaN
3     True
4    False
5      NaN
dtype: object


s[s.str.contains('foo|bar')]
# ---------------------------------------------------------------------------
# ValueError                                Traceback (most recent call last)

Anything that is not a string cannot have string methods applied on it, so the result is NaN (naturally). In this case, specify na=False to ignore non-string data,

s.str.contains('foo|bar', na=False)

0     True
1     True
2    False
3     True
4    False
5    False
dtype: bool

How do I apply this to multiple columns at once?
The answer is in the question. Use DataFrame.apply:

# apply `str.contains` to each column (the default `axis=0` passes columns)
df.apply(lambda col: col.str.contains('foo|bar', na=False))

       A      B
0   True   True
1   True  False
2  False   True
3   True  False
4  False  False
5  False  False

All of the solutions below can be "applied" to multiple columns using the column-wise apply method (which is OK in my book, as long as you don't have too many columns).

If you have a DataFrame with mixed columns and want to select only the object/string columns, take a look at select_dtypes.
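As a sketch of that idea (column names invented for illustration), select_dtypes can narrow the search to string columns before applying the mask:

```python
import pandas as pd

# Hypothetical mixed-dtype frame for illustration
df = pd.DataFrame({'name': ['foo', 'bar', 'baz'],
                   'tag':  ['x', 'foo', 'y'],
                   'n':    [1, 2, 3]})

# Restrict the search to object (string) columns first
str_cols = df.select_dtypes(include='object')
mask = str_cols.apply(lambda col: col.str.contains('foo', na=False)).any(axis=1)
print(df[mask])  # rows 0 and 1
```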


Multiple Substring Search

This is most easily achieved through a regex search using the regex OR pipe.

# Slightly modified example.
df4 = pd.DataFrame({'col': ['foo abc', 'foobar xyz', 'bar32', 'baz 45']})
df4

          col
0     foo abc
1  foobar xyz
2       bar32
3      baz 45

df4[df4['col'].str.contains(r'foo|baz')]

          col
0     foo abc
1  foobar xyz
3      baz 45

You can also create a list of terms, then join them:

terms = ['foo', 'baz']
df4[df4['col'].str.contains('|'.join(terms))]

          col
0     foo abc
1  foobar xyz
3      baz 45

Sometimes, it is wise to escape your terms in case they have characters that can be interpreted as regex metacharacters. If your terms contain any of the following characters...

. ^ $ * + ? { } [ ] \ | ( )

Then, you'll need to use re.escape to escape them:

import re
df4[df4['col'].str.contains('|'.join(map(re.escape, terms)))]

          col
0     foo abc
1  foobar xyz
3      baz 45

re.escape has the effect of escaping the special characters so they're treated literally.

re.escape(r'.foo^')
# '\\.foo\\^'

Matching Entire Word(s)

By default, the substring search searches for the specified substring/pattern regardless of whether it is full word or not. To only match full words, we will need to make use of regular expressions here—in particular, our pattern will need to specify word boundaries (\b).

For example,

df3 = pd.DataFrame({'col': ['the sky is blue', 'bluejay by the window']})
df3

                     col
0        the sky is blue
1  bluejay by the window
 

Now consider,

df3[df3['col'].str.contains('blue')]

                     col
0        the sky is blue
1  bluejay by the window

versus

df3[df3['col'].str.contains(r'\bblue\b')]

               col
0  the sky is blue

Multiple Whole Word Search

Similar to the above, except we add a word boundary (\b) to the joined pattern.

p = r'\b(?:{})\b'.format('|'.join(map(re.escape, terms)))
df4[df4['col'].str.contains(p)]

       col
0  foo abc
3   baz 45

Where p looks like this,

p
# '\\b(?:foo|baz)\\b'

A Great Alternative: Use List Comprehensions!

Because you can! And you should! They are usually a little bit faster than string methods, because string methods are hard to vectorise and usually have loopy implementations.

Instead of,

df1[df1['col'].str.contains('foo', regex=False)]

Use the in operator inside a list comp,

df1[['foo' in x for x in df1['col']]]

      col
0     foo
1  foobar

Instead of,

regex_pattern = r'foo(?!$)'
df1[df1['col'].str.contains(regex_pattern)]

Use re.compile (to cache your regex) + Pattern.search inside a list comp,

p = re.compile(regex_pattern, flags=re.IGNORECASE)
df1[[bool(p.search(x)) for x in df1['col']]]

      col
1  foobar

If "col" has NaNs, then instead of

df1[df1['col'].str.contains(regex_pattern, na=False)]

Use,

def try_search(p, x):
    try:
        return bool(p.search(x))
    except TypeError:
        return False

p = re.compile(regex_pattern)
df1[[try_search(p, x) for x in df1['col']]]

      col
1  foobar
 

More Options for Partial String Matching: np.char.find, np.vectorize, DataFrame.query.

In addition to str.contains and list comprehensions, you can also use the following alternatives.

np.char.find
Supports substring searches (read: no regex) only.

df4[np.char.find(df4['col'].values.astype(str), 'foo') > -1]

          col
0     foo abc
1  foobar xyz

np.vectorize
This is a wrapper around a loop, but with less overhead than most pandas str methods.

f = np.vectorize(lambda haystack, needle: needle in haystack)
f(df1['col'], 'foo')
# array([ True,  True, False, False])

df1[f(df1['col'], 'foo')]

      col
0     foo
1  foobar

Regex solutions possible:

regex_pattern = r'foo(?!$)'
p = re.compile(regex_pattern)
f = np.vectorize(lambda x: pd.notna(x) and bool(p.search(x)))
df1[f(df1['col'])]

      col
1  foobar

DataFrame.query
Supports string methods through the python engine. This offers no visible performance benefits, but is nonetheless useful to know if you need to dynamically generate your queries.

df1.query('col.str.contains("foo")', engine='python')

      col
0     foo
1  foobar

More information on query and eval family of methods can be found at Dynamic Expression Evaluation in pandas using pd.eval().
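A sketch of the "dynamically generated query" use case (the column and pattern values are illustrative; `@pattern` references a local variable from inside the query string):

```python
import pandas as pd

df1 = pd.DataFrame({'col': ['foo', 'foobar', 'bar', 'baz']})

# Build the query string at runtime from pieces chosen elsewhere
column, pattern = 'col', 'foo'
q = f'{column}.str.contains(@pattern)'
print(df1.query(q, engine='python'))  # rows "foo" and "foobar"
```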


Recommended Usage Precedence

  1. (First) str.contains, for its simplicity and ease of handling NaNs and mixed data
  2. List comprehensions, for their performance (especially if your data is purely strings)
  3. np.vectorize
  4. (Last) df.query
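To illustrate, the approaches ranked above agree on the df1 from earlier (a sketch; relative timings will vary with your data):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'col': ['foo', 'foobar', 'bar', 'baz']})

m1 = df1['col'].str.contains('foo', regex=False)            # 1. str.contains
m2 = ['foo' in x for x in df1['col']]                       # 2. list comprehension
m3 = np.vectorize(lambda x: 'foo' in x)(df1['col'])         # 3. np.vectorize
r4 = df1.query('col.str.contains("foo")', engine='python')  # 4. df.query

# All three masks agree, and query returns the same rows
assert m1.tolist() == m2 == list(m3) == [True, True, False, False]
assert r4['col'].tolist() == ['foo', 'foobar']
```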
cs95
  • Could you edit in the correct method to use when searching for a string in two or more columns? Basically: `any(needle in haystack for needle in ['foo', 'bar'] and haystack in (df['col'], df['col2']))` and the variations I tried all choke (it complains about `any()`, and rightly so)... But the doc is blissfully unclear as to how to do such a query. – Denis de Bernardy Jul 16 '19 at 11:37
  • @DenisdeBernardy `df[['col1', 'col2']].apply(lambda x: x.str.contains('foo|bar')).any(axis=1)` – cs95 Jul 28 '19 at 06:30
  • @cs95 [Extracting rows with substring containing whitespace after + in pandas df](https://stackoverflow.com/questions/57238715/extracting-rows-with-substring-containing-whitespace-after-in-pandas-df) It was answered soon, but you might want to have a look at it. – Jul 28 '19 at 07:41
  • @ankiiiiiii Looks like you missed the part of my answer where I mentioned regex metacharacters: "Sometimes, it is wise to escape your terms in case they have characters that can be interpreted as regex metacharacters". – cs95 Jul 28 '19 at 07:53
  • @cs95 At that time, I thought only of the space as the problem and searched this page and the documentation page for the same. While writing the question, special chars came to my mind and I got quick answers before I could search for it. Great answer of yours! +1 – Jul 28 '19 at 07:55
  • Very helpful! Why do you need to insert `r` if regex=True by default? E.g. here: `df1[df1['col'].str.contains(r'foo(?!$)')]` – 00schneider Aug 13 '19 at 10:54
  • @00schneider `r` in this case is used to indicate a raw string literal. These make it easier to write regular expression strings. https://stackoverflow.com/q/2081640/ – cs95 Aug 13 '19 at 13:11
  • How to use this for finding values in multiple columns? – Murtaza Haji Apr 16 '20 at 04:09
  • @MurtazaHaji You can `.apply()` any of these solutions across multiple columns. For more info on how to apply an operation column-wise, take a look at https://stackoverflow.com/questions/54432583/when-should-i-ever-want-to-use-pandas-apply-in-my-code – cs95 Jun 05 '20 at 17:48
  • @MurtazaHaji Made an edit to my answer that shows you how to do that (it's in the "Basic Substring Search" section). – cs95 Jul 08 '20 at 07:24

In case anyone wonders how to solve the related problem "Select columns by partial string":

Use:

df.filter(like='hello')  # select columns which contain the word hello

And to select rows by partial string matching, pass axis=0 to filter:

# selects rows which contain the word hello in their index label
df.filter(like='hello', axis=0)  
ayhan
Philipp Schwarz
  • This can be distilled to: `df.loc[:, df.columns.str.contains('a')]` – elPastor Jun 17 '17 at 21:53
  • which can be further distilled to `df.filter(like='a')` – Ted Petrou Oct 25 '17 at 02:57
  • This should be its own question + answer; already 50 people have searched for it... – PV8 Jan 09 '20 at 09:35
  • @PV8 The question already exists: https://stackoverflow.com/questions/31551412/how-to-select-dataframe-columns-based-on-partial-matching. But when I search on Google for "pandas select column by partial string", this thread appears first. – Philipp Schwarz Jan 09 '20 at 09:37

Quick note: if you want to do selection based on a partial string contained in the index, try the following:

df['stridx']=df.index
df[df['stridx'].str.contains("Hello|Britain")]
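Copying the index into a column works, but as a sketch (sample data invented), the Index object itself exposes .str, so the helper column can be avoided:

```python
import pandas as pd

# Hypothetical frame whose index holds the strings of interest
df = pd.DataFrame({'val': [1, 2, 3]},
                  index=['Hello there', 'Britain calling', 'other'])

# Index objects support .str just like Series do
print(df[df.index.str.contains('Hello|Britain')])  # first two rows
```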
Christian

Say you have the following DataFrame:

>>> df = pd.DataFrame([['hello', 'hello world'], ['abcd', 'defg']], columns=['a','b'])
>>> df
       a            b
0  hello  hello world
1   abcd         defg

You can always use the in operator in a lambda expression to create your filter.

>>> df.apply(lambda x: x['a'] in x['b'], axis=1)
0     True
1    False
dtype: bool

The trick here is to use the axis=1 option in the apply to pass elements to the lambda function row by row, as opposed to column by column.

Mike
  • How do I modify the above to say that x['a'] exists only at the beginning of x['b']? – ComplexData Oct 18 '16 at 20:23
  • `apply` is a bad idea here in terms of performance and memory. See [this answer](https://stackoverflow.com/questions/54432583/when-should-i-ever-want-to-use-pandas-apply-in-my-code). – cs95 Mar 25 '19 at 10:27

Should you need to do a case-insensitive search for a string in a pandas DataFrame column:

df[df['A'].str.contains("hello", case=False)]
cardamom

Here's what I ended up doing for partial string matches. If anyone has a more efficient way of doing this please let me know.

import re
import pandas as pd

def stringSearchColumn_DataFrame(df, colName, regex):
    newdf = pd.DataFrame()
    # items() replaces the deprecated iteritems()
    for idx, record in df[colName].items():
        if re.search(regex, record):
            newdf = pd.concat([df[df[colName] == record], newdf], ignore_index=True)
    return newdf
euforia
  • Should be 2x to 3x faster if you compile the regex before the loop: `regex = re.compile(regex)` and then `if regex.search(record)`. – MarkokraM Apr 10 '14 at 13:56
  • @MarkokraM https://docs.python.org/3.6/library/re.html#re.compile says that the most recent regexes are cached for you, so you don't need to compile yourself. – Teepeemm Jun 20 '18 at 19:36
  • Do not use iteritems to iterate over a DataFrame. It ranks last in terms of pandorability and performance. – cs95 Mar 25 '19 at 10:26
  • Iterating over dataframes defeats the entire purpose of pandas. Use Garrett's solution instead. – dhruvm Jul 22 '20 at 02:12

Using contains didn't work well for my string with special characters. Find worked though.

df[df['A'].str.find("hello") != -1]
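That behaviour is likely because str.contains treats the pattern as a regex by default, while str.find always searches literally. A sketch (sample data invented for illustration):

```python
import pandas as pd

# Hypothetical data where the search string holds regex metacharacters
df = pd.DataFrame({'A': ['cost (usd)', 'cost eur']})

# '(usd)' contains parentheses, which contains would read as a regex group;
# str.find searches for the literal characters and returns -1 on no match
print(df[df['A'].str.find('(usd)') != -1])  # row 0 only
```

Passing regex=False to str.contains achieves the same literal behaviour.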
Katu

Suppose we have a column named "ENTITY" in the dataframe df. We can filter df to keep the rows whose "ENTITY" column does not contain "DM", using a mask as follows:

mask = df['ENTITY'].str.contains('DM')

df = df.loc[~(mask)].copy(deep=True)
Niels Henkens

There are earlier answers that accomplish the asked-for feature; anyway, I would like to show the most general way:

df.filter(regex=".*STRING_YOU_LOOK_FOR.*")

This way you get the column you are looking for, however it is written.

(Obviously, you have to write the proper regex expression for each case.)

xpeiro

Maybe you want to search for some text in all columns of the pandas DataFrame, and not just in a subset of them. In this case, the following code will help.

df[df.apply(lambda row: row.astype(str).str.contains('String To Find').any(), axis=1)]

Warning. This method is relatively slow, albeit convenient.
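As a runnable sketch of that approach (the frame and search string are invented for illustration):

```python
import pandas as pd

# Hypothetical frame with mixed dtypes
df = pd.DataFrame({'a': ['needle here', 'nope'], 'b': [10, 20]})

# Cast each row to str so numeric columns don't break .str.contains
hits = df[df.apply(lambda row: row.astype(str).str.contains('needle').any(), axis=1)]
print(hits)  # only the first row
```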


A more generalised example: looking for parts of a word OR specific words in a string:

df = pd.DataFrame([('cat andhat', 1000.0), ('hat', 2000000.0), ('the small dog', 1000.0), ('fog', 330000.0), ('pet', 330000.0)], columns=['col1', 'col2'])

Specific parts of a sentence or word:

searchfor = '.*cat.*hat.*|.*the.*dog.*'

Create a column showing the affected rows (you can always filter out as necessary):

df["TrueFalse"] = df['col1'].str.contains(searchfor, regex=True)

    col1             col2           TrueFalse
0   cat andhat       1000.0         True
1   hat              2000000.0      False
2   the small dog    1000.0         True
3   fog              330000.0       False
4   pet              330000.0       False
Grant Shannon

You can try casting the values to strings first:

df[df['A'].astype(str).str.contains("Hello|Britain")]
user288925