
If you came here looking for information on how to merge a DataFrame and Series on the index, please look at this answer.

The OP's original intention was to ask how to assign series elements as columns to another DataFrame. If you are interested in knowing the answer to this, look at the accepted answer by EdChum.


Best I can come up with is

df = pd.DataFrame({'a':[1, 2], 'b':[3, 4]})  # see EDIT below
s = pd.Series({'s1':5, 's2':6})

for name in s.index:
    df[name] = s[name]

   a  b  s1  s2
0  1  3   5   6
1  2  4   5   6

Can anybody suggest better syntax / faster method?

My attempts:

df.merge(s)
AttributeError: 'Series' object has no attribute 'columns'

and

df.join(s)
ValueError: Other Series must have a name

EDIT The first two answers posted highlighted a problem with my question, so please use the following to construct df:

df = pd.DataFrame({'a':[np.nan, 2, 3], 'b':[4, 5, 6]}, index=[3, 5, 6])

with the final result

    a  b  s1  s2
3 NaN  4   5   6
5   2  5   5   6
6   3  6   5   6
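For reference, the column-by-column loop above can also be written as a single `assign` call. This is a sketch using the edited `df` and `s`; `assign` broadcasts scalar keyword values down every row, and unpacking the Series as a dict turns each element into a keyword argument:

```python
import numpy as np
import pandas as pd

# The edited setup from the question.
df = pd.DataFrame({'a': [np.nan, 2, 3], 'b': [4, 5, 6]}, index=[3, 5, 6])
s = pd.Series({'s1': 5, 's2': 6})

# Each scalar value is broadcast as a constant column.
result = df.assign(**s.to_dict())
print(result)
```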
Nathan Lloyd · cs95

7 Answers


Update
From v0.24.0 onwards, you can merge a DataFrame and a Series, as long as the Series is named.

df.merge(s.rename('new'), left_index=True, right_index=True)
# If series is already named,
# df.merge(s, left_index=True, right_index=True)

Nowadays, you can simply convert the Series to a DataFrame with to_frame(). So (if joining on index):

df.merge(s.to_frame(), left_index=True, right_index=True)
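A minimal runnable sketch of this index merge (the frames below are illustrative, not taken from the question):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]}, index=[0, 1, 2])
s = pd.Series([10, 20, 30], index=[0, 1, 2], name='new')

# pandas >= 0.24 accepts a named Series directly...
merged = df.merge(s, left_index=True, right_index=True)

# ...which is equivalent to converting it to a DataFrame first.
merged2 = df.merge(s.to_frame(), left_index=True, right_index=True)
print(merged)
```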
cs95 · Nicholas Morley
  • Using the question's definitions of `df` and `s`, this answer returns for me an empty dataframe, *not* the result requested in the question. We don't want to match on index; we want to broadcast the `s` values to all rows of `df`. – CPBL May 20 '18 at 17:20
  • This is solving a different problem: "given a DataFrame and Series, how can they be merged on the index". OP's question was "assign each element of a Series as a new column in a DataFrame". – cs95 Dec 08 '18 at 19:35

You could construct a DataFrame from the Series and then merge it with your DataFrame. Specify the Series' values, repeated len(s) times, as the data, set the columns to the Series' index, and set the left_index and right_index params to True:

In [27]:

df.merge(pd.DataFrame(data = [s.values] * len(s), columns = s.index), left_index=True, right_index=True)
Out[27]:
   a  b  s1  s2
0  1  3   5   6
1  2  4   5   6

EDIT: For the situation where you want the DataFrame constructed from the series to use the index of df, you can do the following:

df.merge(pd.DataFrame(data = [s.values] * len(df), columns = s.index, index=df.index), left_index=True, right_index=True)

This assumes that the lengths of the indices match.
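To see what the intermediate frame looks like, here is the question's edited example worked through step by step (a sketch; `tiled` is a name introduced here for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [np.nan, 2, 3], 'b': [4, 5, 6]}, index=[3, 5, 6])
s = pd.Series({'s1': 5, 's2': 6})

# One copy of s's values per row of df, labelled with df's index.
tiled = pd.DataFrame([s.values] * len(df), columns=s.index, index=df.index)

result = df.merge(tiled, left_index=True, right_index=True)
print(result)
```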

EdChum

Here's one way:

df.join(pd.DataFrame(s).T).fillna(method='ffill')

To break down what happens here...

pd.DataFrame(s).T creates a one-row DataFrame from s which looks like this:

   s1  s2
0   5   6

Next, join concatenates this new frame with df:

   a  b  s1  s2
0  1  3   5   6
1  2  4 NaN NaN

Lastly, the NaN values at index 1 are filled with the previous values in the column using fillna with the forward-fill (ffill) argument:

   a  b  s1  s2
0  1  3   5   6
1  2  4   5   6

To avoid using fillna, it's possible to use pd.concat to repeat the rows of the DataFrame constructed from s. In this case, the general solution is:

df.join(pd.concat([pd.DataFrame(s).T] * len(df), ignore_index=True))

Here's another solution to address the indexing challenge posed in the edited question:

df.join(pd.DataFrame(s.repeat(len(df)).values.reshape((len(df), -1), order='F'), 
        columns=s.index, 
        index=df.index))

s is transformed into a DataFrame by repeating the values and reshaping (specifying 'Fortran' order), and also passing in the appropriate column names and index. This new DataFrame is then joined to df.
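Spelled out with the question's edited data, the repeat/reshape step looks like this (a sketch; `block` is a name introduced here for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [np.nan, 2, 3], 'b': [4, 5, 6]}, index=[3, 5, 6])
s = pd.Series({'s1': 5, 's2': 6})

# s.repeat(3) gives [5, 5, 5, 6, 6, 6]; the Fortran-order reshape
# stacks them column by column into a len(df) x len(s) block.
block = pd.DataFrame(
    s.repeat(len(df)).values.reshape((len(df), -1), order='F'),
    columns=s.index,
    index=df.index,
)
result = df.join(block)
print(result)
```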

Alex Riley
  • Nice one-liner, a caveat is that any NaN's already in df will get filled as well. – Nathan Lloyd Oct 09 '14 at 00:11
  • @Nonth Thanks and good point. I've edited to include an alternative which avoids filling in `NaN` values. – Alex Riley Oct 09 '14 at 08:58
  • What happened with EdChum's original answer affects this revised answer. If I construct the df with, say, `index=[3, 5]`, the new columns contain nan's after your command. – Nathan Lloyd Oct 10 '14 at 21:29
  • @Nonth Edited again! It should now meet your new requirements. – Alex Riley Oct 10 '14 at 22:27
  • your answer is 20x faster, but it's still a difference of ~100ms with df at 1e5 rows. My for loop is horrifically slow. BTW in your answer the `2` ought to be `len(df)` to be generally applicable. – Nathan Lloyd Oct 10 '14 at 23:35
  • Greetings! Your Edit 2 no longer appears to work. I get `ValueError: Shape of passed values is (2, 2), indices imply (2, 3)` when I try and run this. – cs95 Dec 08 '18 at 09:58
  • @coldspeed: thanks for pointing out the issue! I've edited the answer: hopefully the code works and is more general now. – Alex Riley Dec 09 '18 at 13:36

May I suggest setting up your dataframe like this (with automatic indexing):

df = pd.DataFrame({'a':[np.nan, 1, 2], 'b':[4, 5, 6]})

then you can set up your s1 and s2 values like this (using shape[0] to get the number of rows from df):

s = pd.DataFrame({'s1':[5]*df.shape[0], 's2':[6]*df.shape[0]})

then the result you want is easy:

display(df.merge(s, left_index=True, right_index=True))

Alternatively, just add the new values to your dataframe df:

df = pd.DataFrame({'a':[np.nan, 1, 2], 'b':[4, 5, 6]})
df['s1']=5
df['s2']=6
display(df)

Both return:

     a  b  s1  s2
0  NaN  4   5   6
1  1.0  5   5   6
2  2.0  6   5   6

If you have another list of data (instead of just a single value to apply), and you know it is in the same sequence as df, eg:

s1=['a','b','c']

then you can attach this in the same way:

df['s1']=s1

returns:

     a  b s1
0  NaN  4  a
1  1.0  5  b
2  2.0  6  c
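Since this relies on positional alignment, it can help to guard the length explicitly (a sketch; the assertion is an addition, not part of the answer above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [np.nan, 1, 2], 'b': [4, 5, 6]})
s1 = ['a', 'b', 'c']

# A plain list is assigned by position, so its length must match df's.
assert len(s1) == len(df)
df['s1'] = s1
print(df)
```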
James

You can easily set a pandas.DataFrame column to a constant. This constant can be an int such as in your example. If the column you specify isn't in the df, then pandas will create a new column with the name you specify. So after your dataframe is constructed, (from your question):

df = pd.DataFrame({'a':[np.nan, 2, 3], 'b':[4, 5, 6]}, index=[3, 5, 6])

You can just run:

df['s1'], df['s2'] = 5, 6

You could write a loop or comprehension to make it do this for all the elements in a list of tuples, or keys and values in a dictionary depending on how you have your real data stored.
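For example, the loop over a dictionary of column names and constant values might look like this (a sketch of the suggestion, not code from the answer):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [np.nan, 2, 3], 'b': [4, 5, 6]}, index=[3, 5, 6])

# Each scalar value is broadcast down its new column.
for col, val in {'s1': 5, 's2': 6}.items():
    df[col] = val

print(df)
```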

Alex

If df is a pandas.DataFrame, then df['new_col'] = list_object (where list_object is a list or Series of length len(df)) will add the list or Series as a column named 'new_col'. df['new_col'] = scalar (such as 5 or 6 in your case) also works, and is equivalent to df['new_col'] = [scalar] * len(df).

So a short loop serves the purpose:

df = pd.DataFrame({'a':[1, 2], 'b':[3, 4]})
s = pd.Series({'s1':5, 's2':6})
for x in s.index:    
    df[x] = s[x]

Output: 
   a  b  s1  s2
0  1  3   5   6
1  2  4   5   6

Nowadays, a much simpler and more concise solution can achieve the same task. Leveraging the capability of DataFrame.apply() to turn a returned Series into columns of the resulting DataFrame, we can use:

df.join(df.apply(lambda x: s, axis=1))

Result:

     a  b  s1  s2
3  NaN  4   5   6
5  2.0  5   5   6
6  3.0  6   5   6

Here, we used DataFrame.apply() with a simple lambda function as the applied function on axis=1. The applied lambda function simply returns the Series s:

df.apply(lambda x: s, axis=1)

Result:

   s1  s2
3   5   6
5   5   6
6   5   6

The result has already inherited the row index of the original DataFrame df. Consequently, we can simply join df with this interim result using DataFrame.join() to get the desired final result (since they share the same row index).

This capability of DataFrame.apply() to turn a returned Series into columns of the resulting DataFrame is covered in the official documentation as follows:

By default (result_type=None), the final return type is inferred from the return type of the applied function.


The default behaviour (result_type=None) depends on the return value of the applied function: list-like results will be returned as a Series of those. However if the apply function returns a Series these are expanded to columns.

The official documentation also includes an example of such usage:

Returning a Series inside the function is similar to passing result_type='expand'. The resulting column names will be the Series index.

df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)   

   foo  bar
0    1    2
1    1    2
2    1    2
SeaBean
  • @jezrael Totally agree that `apply` is mostly a convenient tool than performance efficient tool, especially on `axis=1`, though there are [some legitimate uses of `apply`](https://stackoverflow.com/a/54433552/15070697) in some use cases. – SeaBean Apr 23 '21 at 11:00
  • sure, I don't think apply is bad, just be careful when using it ;) – jezrael Apr 23 '21 at 11:01