1302

I have JSON data stored in the variable data.

I want to write this to a text file for testing so I don't have to grab the data from the server each time.

Currently, I am trying this:

obj = open('data.txt', 'wb')
obj.write(data)
obj.close

And I am receiving this error:

TypeError: must be string or buffer, not dict

How do I fix this?

user1530318
  • For flags when opening a file: the "w" argument indicates write mode and will create the file if it does not exist; a plus sign indicates both read and write. https://www.guru99.com/reading-and-writing-files-in-python.html#1 – Charlie Parker Sep 22 '20 at 17:14

14 Answers

2324

You forgot the actual JSON part - data is a dictionary and not yet JSON-encoded. Write it like this for maximum compatibility (Python 2 and 3):

import json
with open('data.json', 'w') as f:
    json.dump(data, f)

On a modern system (i.e. Python 3 and UTF-8 support), you can write a nicer file with

import json
with open('data.json', 'w', encoding='utf-8') as f:
    json.dump(data, f, ensure_ascii=False, indent=4)
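
As an aside (this comes up in the comments below): json.dump writes directly to a file object, while json.dumps returns the encoded string, which you then have to write yourself. A minimal sketch of the difference, using a stand-in dictionary:

import json

data = {'example': 1}  # stand-in for the data from the question

# json.dump: serialize straight into an open file
with open('data.json', 'w') as f:
    json.dump(data, f)

# json.dumps: build the JSON text as a string first, then write it
with open('data.json', 'w') as f:
    f.write(json.dumps(data))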
phihag
  • this might be helpful for serializing: http://stackoverflow.com/questions/4512982/python-typeerror-cant-write-str-to-text-stream – jedierikb Feb 11 '13 at 17:27
  • Do you mean json.dump or json.dumps? – TerminalDilettante Aug 13 '15 at 14:46
  • @TerminalDilettante `json.dump` writes to a file or file-like object, whereas `json.dumps` returns a string. – phihag Aug 13 '15 at 20:58
  • btw: to re-read the data use: with open('data.txt') as infile: d = json.load(infile). See: [this answer](http://stackoverflow.com/questions/20199126/reading-a-json-file-using-python) – klaas Mar 07 '16 at 12:59
  • Should this be "wb" instead of "w" (speed etc.)? – denvar Apr 19 '16 at 18:40
  • @denvar No, this answer is finely tuned. On Python 3, `json.dump` writes to a text file, not a binary file. You'd get a `TypeError` if the file was opened with `wb`. On older Python versions, both `w` and `wb` work. An explicit encoding is not necessary since the output of `json.dump` is ASCII-only by default. If you can be sure that your code is never run on legacy Python versions and you and the handler of the JSON file can correctly handle non-ASCII data, you can specify one and set `ensure_ascii=False`. – phihag Apr 19 '16 at 18:47
  • @marctrem Can you elaborate why you think that? The question asks about writing to a file called `data.txt`. In any case, the file name does not matter. – phihag Dec 13 '16 at 20:18
  • Yes, it works. But this solution gives 7-bit output for both python2 and python3: all non-ascii characters are encoded as ascii (for example `'π'` is encoded as 6 bytes: `'\u03c0'`). It's a good idea today to have utf8-encoded json files. They are up to 3 times smaller (`'π'` is encoded as just two bytes: `b'\xcf\x80'`) plus they are readable in any modern editor (with readability being one of the major advantages of json as compared to xml). See [my answer](http://stackoverflow.com/a/14870531/237105) below for details. – Antony Hatchkins Feb 10 '17 at 10:05
  • I hit `TypeError: a bytes-like object is required, not 'str'` raised from `fp.write(chunk)` inside `json.dump` in Python 3 – user305883 Feb 15 '17 at 15:14
  • where does the resulting file 'data.txt' get saved? how do i access it on ubuntu 16.04? – kRazzy R Oct 24 '17 at 20:05
  • @kRazzyR Instead of `data.txt`, you can pass in any path to wherever you want the file. For instance, if your username is `krazzyr` on Ubuntu, you can pass in `'/home/krazzyr/Desktop/myfile.json'` and the file will appear on your desktop. If the path does not start with a slash (`/`), it's relative to your current working directory. For instance, if your working directory is `/home/krazzyr` and you pass in `data.txt`, the file name will be `/home/krazzyr/data.txt`. In a shell, you can type `pwd` to print the working directory. – phihag Oct 24 '17 at 20:26
  • @kRazzyR How you access it depends on what you want to do. Popular options include command-line text editors like nano or vi, graphical text editors like emacs, kwrite, gedit, or sublime text, IDEs like vscode and Eclipse, command-line output with `cat` or `less`, and colored or filtered display with `jq`. In general, in a command line, type the program you want to run and then a path. Again, paths can be relative or absolute. For instance, if you saved your file to `/home/krazzyr/Desktop/myfile.json` and your working directory is `/home/krazzyr`, `gedit Desktop/myfile.json` will do. – phihag Oct 24 '17 at 20:30
  • Thank you for such a useful and detailed explanation . (I'm primarily an R user making a move to Python. ) So changing `open('data.txt', 'w')` to `open('/home/krazzyr/data.txt', 'w') ` would create a file 'data.txt' and save the file to `home/krazzyr/`. – kRazzy R Oct 24 '17 at 23:21
  • @kRazzyR Almost: Absolute paths start with a slash, so the file will end up in `/home/krazzyr` - note the leading slash. But don't be afraid of relative paths, they're often more useful. If you just specify `data.txt` and run the program while you're in `/home/krazzyr` (the default), your file will end up in `/home/krazzyr/` as well. But when I run your program while I'm in `/home/phihag/stackoverflow/`, it'll get it written to `/home/phihag/stackoverflow/`. That's useful because `/home/krazzyr/` might not exist on my system, and I want the program's output in a sub directory anyway. – phihag Oct 25 '17 at 06:08
  • In my program , I have a method that runs thrice. So instead of `w` I put `a`. Now how do I add a new line or something like that to the output file after each run, so that I can distinguish all the three different outputs in the file generated from this: `import json with open('data.txt', 'a') as outfile: json.dump(data, outfile)` – kRazzy R Nov 03 '17 at 15:31
  • @kRazzyR That sounds like a great question. Go ahead and [ask it on stackoverflow](https://stackoverflow.com/questions/ask)! – phihag Nov 03 '17 at 22:45
  • @kRazzyR if you do ask that as a question, please post a link to it in a comment here. It's related enough I think. – Mnebuerquo Jan 08 '18 at 15:47
  • yes it is. but by the time I found a solution, my question got flagged as duplicate. here it is : https://stackoverflow.com/questions/47140526/add-comments-or-new-lines-to-output-saved-in-a-file-each-time-method-function – kRazzy R Jan 08 '18 at 16:36
  • @Gulzar That's [a different question](https://stackoverflow.com/q/2835559/35070). (Spoiler: Use `json.load`) – phihag Dec 31 '18 at 07:58
  • Does the second method work on all the latest Windows, Macintosh, linux systems? – NoobCat Sep 18 '20 at 15:40
  • @Pastrokkio Yes, provided that you want to write the file as UTF-8, and you have Python 3. – phihag Sep 19 '20 at 18:11
  • what flag should I use: `w`, `w+`, or `a+`? I want to create the file if it doesn't exist and write to it from scratch each time. – Charlie Parker Sep 22 '20 at 17:11
  • Here, we used the "w" letter in our argument, which indicates write and will create a file if it does not exist; a plus sign indicates both read and write. – Charlie Parker Sep 22 '20 at 17:13
  • @CharlieParker Then `w` is correct. `w+` is very rarely correct; `+` means you want to later read from the same file pointer. – phihag Sep 22 '20 at 21:24
  • @CharlieParker Yes, `json.dump(data, open(path, 'w'))` would be the one-liner. However, omitting the `with` statement can [lead to problems in long-running programs](https://docs.quantifiedcode.com/python-anti-patterns/maintainability/not_using_with_to_open_files.html). In a ten-line script, there's no harm, but if you write code that can be used as a library, or a long-running application, you should use `with`. – phihag Feb 11 '21 at 20:14
  • @phihag let me see if I understand. So it's an issue because if I do the one liner I still have to close the file (but I don't even have a handle to it so I can't close it, so it could become corrupted...for some mystical reason?) – Charlie Parker Feb 11 '21 at 22:58
  • @CharlieParker The handle does not become _corrupted_, but it can linger in memory. In a tight loop, that might cause handle exhaustion. There is no guarantee when and whether Python collects (and then auto-closes) lingering handles. Also, if you haven't called `close` (either manually or in a `with` statement), then there is no guarantee that the file has been flushed, and the last bytes may be missing. If the file is on a network filesystem and not all bytes have been transmitted, then the line after `json.dump` may cause some other program to read the file, and find an incomplete file. – phihag Feb 12 '21 at 00:00
276

To get a utf8-encoded file, as opposed to the ascii-encoded one produced by the accepted answer, for Python 2 use:

import io, json
with io.open('data.txt', 'w', encoding='utf-8') as f:
  f.write(json.dumps(data, ensure_ascii=False))

The code is simpler in Python 3:

import json
with open('data.txt', 'w') as f:
  json.dump(data, f, ensure_ascii=False)

On Windows, the encoding='utf-8' argument to open is still necessary.
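
A sketch of that Python 3 call with the explicit encoding, for platforms where the default encoding is not UTF-8:

import json
with open('data.txt', 'w', encoding='utf-8') as f:
    json.dump(data, f, ensure_ascii=False)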

To avoid storing an encoded copy of the data in memory (result of dumps) and to output utf8-encoded bytestrings in both Python 2 and 3, use:

import json, codecs
with open('data.txt', 'wb') as f:
    json.dump(data, codecs.getwriter('utf-8')(f), ensure_ascii=False)

The codecs.getwriter call is redundant in Python 3 but required for Python 2.


Readability and size:

The use of ensure_ascii=False gives better readability and smaller size:

>>> json.dumps({'price': '€10'})
'{"price": "\\u20ac10"}'
>>> json.dumps({'price': '€10'}, ensure_ascii=False)
'{"price": "€10"}'

>>> len(json.dumps({'абвгд': 1}))
37
>>> len(json.dumps({'абвгд': 1}, ensure_ascii=False).encode('utf8'))
17

Further improve readability by adding flags indent=4, sort_keys=True (as suggested by dinos66) to arguments of dump or dumps. This way you'll get a nicely indented sorted structure in the json file at the cost of a slightly larger file size.
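
For example, a quick sketch of what those flags produce (sort_keys orders the keys, indent pretty-prints):

>>> print(json.dumps({'price': '€10', 'currency': 'EUR'}, ensure_ascii=False, indent=4, sort_keys=True))
{
    "currency": "EUR",
    "price": "€10"
}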

Antony Hatchkins
  • The `unicode` is superfluous - the result of `json.dumps` is already a unicode object. Note that this fails in 3.x, where this whole mess of output file mode has been cleaned up, and json always uses character strings (and character I/O) and never bytes. – phihag Feb 14 '13 at 11:20
  • In 2.x `type(json.dumps('a'))` is `<type 'str'>`. Even `type(json.dumps('a', encoding='utf8'))` is `<type 'str'>`. – Antony Hatchkins Feb 14 '13 at 11:25
  • Yes, in 3.x json uses strings, yet the default encoding is ascii. You have to explicitly tell it that you want `utf8` even in 3.x. Updated the answer. – Antony Hatchkins Feb 14 '13 at 11:39
  • Data providers such as twitter sometimes provide data with a variety of encoding. The code above works fine for all unicode tweets but there are some cases where Latin symbols appear and you get errors such as the following: 'UnicodeEncodeError: 'charmap' codec can't encode character '\xdc' in position 3088: character maps to ' Any ideas on what can be done in these cases? – dinos66 Jul 09 '15 at 13:08
  • @dinos66 Are you able to print the string to a unicode console without this error? Try to localize the issue: find at least 5-7 bytes before and after the problematic symbol and quote them here. My version is that the string was incorrectly decoded into unicode earlier. – Antony Hatchkins Jul 10 '15 at 04:47
  • @AntonyHatchkins Thank you for your interest. I managed to solve that by using the codecs lib. So my solution would be: with codecs.open('data.txt', 'w','utf8') as outfile: outfile.write(json.dumps(jsonData, sort_keys = True, ensure_ascii=False)) – dinos66 Jul 10 '15 at 14:37
  • The Python 3.x answer worked for me even though I'm using 2.7. The 2.x answer returned an error: `'ascii' codec can't decode byte 0xf1 in position 506755: ordinal not in range(128)`. So when in doubt, use the 3.x answer! – Blairg23 Dec 22 '15 at 18:44
  • @Blairg23 That's because you are dealing with non-ascii str in one-byte encoding. That is strongly advised against. Use `u''` notation for literals instead of `''`. For example `unicode('абвгд')` gives an error in python2.x: you have to either explicitly `'абвгд'.decode('utf8')` the string or use `u'абвгд'` notation. – Antony Hatchkins Dec 22 '15 at 19:58
  • to me `codecs.getwriter` was necessary in python 3. Otherwise `json.dump(recipe, ensure_ascii=False)` raises `TypeError: dump() missing 1 required positional argument: 'fp'` – user305883 Feb 15 '17 at 15:19
  • @user305883 That's because you've missed a required positional argument, fp. `getwriter` is not necessary, but `f` is necessary. See my solution for python3.x – Antony Hatchkins Feb 16 '17 at 03:49
  • @Cas Thank you for your edit, I've fixed a couple of typos in it, but overall being written this way gives a much better reading experience :) – Antony Hatchkins Mar 05 '17 at 15:54
  • This worked for me when the above method was inserting control feed '\n' and backslashes '\' in my file. – Odysseus Ithaca Mar 30 '17 at 21:09
  • This is such a better answer than the accepted one for conveying the complexities of everything going on here. That said, I still don't fully understand why a solution using the encoding-explicit context manager `with io.open('data.txt', 'w', encoding='utf-8') as f:` can't work for the `json.dump` approach, but nonetheless this helped me solve my problem with `codecs.getwriter('utf-8')(f)` and binary access. I'm guessing it has something to do with what `json.dump` expects but in reading it I can't figure out what is actually doing that. – mpacer May 04 '17 at 00:25
  • For Python 3.6.1 I had to add keyword argument `encoding='utf-8'` to open in order to have correct utf-8 file encoding: `import json with open('data.txt', 'w') as f: json.dump(data, f, ensure_ascii=False)` – rik Jun 29 '17 at 15:02
  • @rik What is your OS? – Antony Hatchkins Jun 30 '17 at 10:18
  • @AntonyHatchkins Win 10 x64. – rik Jul 03 '17 at 15:46
  • @CharlieParker As soon as the file object goes out of scope it is automatically closed by CPython. Other python implementations might postpone this step, so it is considered more 'clean' to wrap it into the 'with' construct or close the file explicitly, to avoid running out of the limit of open files and/or leaving the files locked for writing (e.g. you can't move or delete them until the program completes/notebook is closed). In my opinion it is safe to use the one-liner in CPython, but if I write it in the answer it will be eagerly downvoted by unaware readers. – Antony Hatchkins Feb 12 '21 at 13:25
  • @AntonyHatchkins Thanks! CPython? Is that different from my normal python I use? I'm wondering if there is anything subtle in your response to me... – Charlie Parker Feb 12 '21 at 19:58
  • @CharlieParker CPython = "normal python". No, nothing subtle. It's just a bad habit to use `open` without `with`. For example, if you keep a file open in a jupyter notebook, you'll have trouble renaming or moving this file on windows. But actually that's the only negative consequence I'm aware of. – Antony Hatchkins Feb 15 '21 at 13:32
165

I would answer with a slight modification to the aforementioned answers: write a prettified JSON file that human eyes can read better. For this, pass sort_keys=True and indent=4, and you are good to go. Also take care that non-ASCII characters are not escaped into ASCII codes in your JSON file:

import json

with open('data.txt', 'w') as outfile:
    json.dump(jsonData, outfile, sort_keys=True, indent=4,
              ensure_ascii=False)
ambodi
  • still getting `UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc'` – Steve K Oct 13 '14 at 23:16
  • @SirBenBenji Ensure the string you are trying to write follows: str.decode('utf-8'). – ambodi Apr 22 '15 at 09:08
  • @SirBenBenji You can try using codecs too, as dinos66 specifies below – Shiv Sep 03 '15 at 19:01
  • You also have to declare your encoding by adding `# -*- coding: utf-8 -*-` after the shebang – aesede Apr 02 '16 at 17:29
  • +1 for sort_keys and indent. @aesede It's no good to add this line because it will give the impression that this solution works with python2 as well, which it doesn't (`UnicodeEncodeError` with non-ascii data). See [my solution](http://stackoverflow.com/a/14870531/237105) for details. – Antony Hatchkins Feb 10 '17 at 10:41
120

Read and write JSON files with Python 2+3; works with unicode

# -*- coding: utf-8 -*-
import json

# Make it work for Python 2+3 and with Unicode
import io
try:
    to_unicode = unicode
except NameError:
    to_unicode = str

# Define data
data = {'a list': [1, 42, 3.141, 1337, 'help', u'€'],
        'a string': 'bla',
        'another dict': {'foo': 'bar',
                         'key': 'value',
                         'the answer': 42}}

# Write JSON file
with io.open('data.json', 'w', encoding='utf8') as outfile:
    str_ = json.dumps(data,
                      indent=4, sort_keys=True,
                      separators=(',', ': '), ensure_ascii=False)
    outfile.write(to_unicode(str_))

# Read JSON file
with open('data.json') as data_file:
    data_loaded = json.load(data_file)

print(data == data_loaded)

Explanation of the parameters of json.dump:

  • indent: Use 4 spaces to indent each entry, e.g. when a new dict is started (otherwise all will be in one line),
  • sort_keys: sort the keys of dictionaries. This is useful if you want to compare json files with a diff tool / put them under version control.
  • separators: To prevent Python from adding trailing whitespace

With a package

Have a look at my utility package mpu for a super simple and easy to remember one:

import mpu.io
data = mpu.io.read('example.json')
mpu.io.write('example.json', data)

Created JSON file

{
    "a list":[
        1,
        42,
        3.141,
        1337,
        "help",
        "€"
    ],
    "a string":"bla",
    "another dict":{
        "foo":"bar",
        "key":"value",
        "the answer":42
    }
}

Common file endings

.json

Alternatives

For your application, the following might be important:

  • Support by other programming languages
  • Reading / writing performance
  • Compactness (file size)

See also: Comparison of data serialization formats

In case you are rather looking for a way to make configuration files, you might want to read my short article Configuration files in Python

Martin Thoma
  • Note that the `ensure_ascii` flag is `True` by default. You'll have unreadable 6-byte `"\u20ac"` sequences for each `€` in your json file (as well as for any other non-ascii character). – Antony Hatchkins Feb 10 '17 at 11:13
  • Why do you use `open` for the reading but `io.open` for writing? Is it _possible_ to use `io.open` for reading as well? If so, what parameters should be passed? – Micah Zoltu Jun 05 '17 at 05:31
24

For those of you who, like me, are trying to dump Greek or other "exotic" languages and are also having problems (unicode errors) with unusual characters such as the peace symbol (\u262E) or others that are often contained in JSON-formatted data such as Twitter's, the solution could be as follows (sort_keys is obviously optional):

import codecs, json
with codecs.open('data.json', 'w', 'utf8') as f:
     f.write(json.dumps(data, sort_keys = True, ensure_ascii=False))
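
To read such a file back later, a minimal sketch along the same lines (json.load does the parsing once the file is opened as UTF-8):

import codecs, json
with codecs.open('data.json', 'r', 'utf8') as f:
    data = json.load(f)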
dinos66
  • +1 While the docs recommend python3's builtin `open` and the associated `io.open` over `codecs.open`, in this case it is also a nice backwards-compatible hack. In python2 `codecs.open` is more "omnivorous" than io.open (it can "eat" both str and unicode, converting if necessary). One can say that this `codecs.open` quirk compensates for the `json.dumps` quirk of generating different types of objects (`str`/`unicode`) depending on the presence of unicode strings in the input. – Antony Hatchkins Feb 10 '17 at 11:06
14

Writing JSON to a File

import json

data = {}
data['people'] = []
data['people'].append({
    'name': 'Scott',
    'website': 'stackabuse.com',
    'from': 'Nebraska'
})
data['people'].append({
    'name': 'Larry',
    'website': 'google.com',
    'from': 'Michigan'
})
data['people'].append({
    'name': 'Tim',
    'website': 'apple.com',
    'from': 'Alabama'
})

with open('data.txt', 'w') as outfile:
    json.dump(data, outfile)

Reading JSON from a File

import json

with open('data.txt') as json_file:
    data = json.load(json_file)
    for p in data['people']:
        print('Name: ' + p['name'])
        print('Website: ' + p['website'])
        print('From: ' + p['from'])
        print('')
iman
  • Welcome to Stack Overflow. If you decide to answer an older question that has well established and correct answers, adding a new answer late in the day may not get you any credit. If you have some distinctive new information, or you're convinced the other answers are all wrong, by all means add a new answer, but 'yet another answer' giving the same basic information a long time after the question was asked usually won't earn you much credit. (You show some sample data; that's good, but I'm not sure it's enough, especially as you don't show what is produced for the sample data.) – Jonathan Leffler Dec 26 '19 at 09:05
  • I think the answer is ok because it contains more details and clarity. – M.Innat Dec 02 '20 at 07:54
11

I don't have enough reputation to add comments, so I'll just write down some of my findings about this annoying TypeError here:

Basically, I think it's a bug in the json.dump() function in Python 2 only: it can't dump Python data (a dictionary or list) containing non-ASCII characters, even if you open the file with the encoding='utf-8' parameter (i.e. no matter what you do). But json.dumps() works on both Python 2 and 3.

To illustrate this, following up on phihag's answer: the code in his answer breaks in Python 2 with the exception TypeError: must be unicode, not str if data contains non-ASCII characters (Python 2.7.6, Debian):

import json
data = {u'\u0430\u0431\u0432\u0433\u0434': 1} #{u'абвгд': 1}
with open('data.txt', 'w') as outfile:
    json.dump(data, outfile)

It however works fine in Python 3.
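
For Python 2 in that situation, a sketch of the json.dumps-based workaround (along the lines of the other answers here), assuming the data may contain non-ASCII text:

# -*- coding: utf-8 -*-
import io, json

data = {u'\u0430\u0431\u0432\u0433\u0434': 1}  # {u'абвгд': 1}
with io.open('data.txt', 'w', encoding='utf-8') as outfile:
    outfile.write(json.dumps(data, ensure_ascii=False))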

ibic
  • Give reasons when you claim something to be wrong. Use @nickname so the person gets notified. You cannot write comments, but you can read comments. As already stated in my answer to the first comment, try `data = {'asdf': 1}`. You'll get the notorious `TypeError` with your (second) variant. – Antony Hatchkins Feb 10 '17 at 08:55
  • Concerning `ensure_ascii` - it is necessary if you want to get a "real" utf8 output. Without it you'll have plain ascii with 6 bytes per russian letter as opposed to 2 bytes per character with this flag. – Antony Hatchkins Feb 10 '17 at 08:56
  • @AntonyHatchkins You are right for the `unicode()` part. I just realised for `io` package in Python 2, `write()` needs `unicode`, not `str`. – ibic Feb 12 '17 at 16:29
  • This code works for me even with python2.6.6, Debian (Dec 10 2010). As well as with python2.7.9 or python3. Check it once again, plz. – Antony Hatchkins Feb 21 '17 at 04:40
10

To write data to a file as JSON, use json.dump() or json.dumps(). Write it like this to store the data in a file:

import json
data = [1,2,3,4,5]
with open('no.txt', 'w') as txtfile:
    json.dump(data, txtfile)

This example stores a list in a file.
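
To read the list back, a minimal sketch:

import json
with open('no.txt') as txtfile:
    data = json.load(txtfile)
print(data)  # [1, 2, 3, 4, 5]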

Vishal Gediya
5
json.dump(data, open('data.txt', 'w'))
Alexander
  • This does the same thing as @phihag's answer, but is not guaranteed to work at all times. Consider such code: `f = open('1.txt', 'w'); f.write('a'); input()`. Run it and then SIGTERM it (`Ctrl-Z` then `kill %1` on linux, `Ctrl-Break` on Windows). `1.txt` will have 0 bytes. It is because the writing was buffered and the file was neither flushed nor closed at the moment when the SIGTERM occurred. The `with` block guarantees that the file always gets closed, just like a 'try/finally' block does, but shorter. – Antony Hatchkins Feb 10 '17 at 10:27
5

To write the JSON with indentation, "pretty print":

import json

with open('data.json', 'w') as outfile:
    json.dump(data, outfile, indent=4)

Also, if you need to debug improperly formatted JSON and want a helpful error message, use the simplejson library instead of json (the functions should be the same).
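
A minimal sketch of swapping it in, assuming the simplejson package is installed (the call itself stays the same):

import simplejson as json  # drop-in replacement for the standard json module

with open('data.json', 'w') as outfile:
    json.dump(data, outfile, indent=4)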

James Wierzba
2

If you are trying to write a pandas DataFrame to a file using JSON format, I'd recommend this:

destination = 'filepath'
with open(destination, 'w') as saveFile:
    saveFile.write(df.to_json())
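
As a side note, DataFrame.to_json also accepts a file path directly, so a shorter sketch (with the same assumed df and destination) would be:

df.to_json(destination)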
2

All the previous answers are correct; here is a very simple example:

#! /usr/bin/env python
import json

def write_json():
    # create a dictionary  
    student_data = {"students":[]}
    #create a list
    data_holder = student_data["students"]
    # just a counter
    counter = 0
    #loop through if you have multiple items..         
    while counter < 3:
        data_holder.append({'id':counter})
        data_holder.append({'room':counter})
        counter += 1    
    #write the file        
    file_path='/tmp/student_data.json'
    with open(file_path, 'w') as outfile:
        print("writing file to: ",file_path)
        # HERE IS WHERE THE MAGIC HAPPENS 
        json.dump(student_data, outfile)
    print("done")

write_json()


grepit
1

The accepted answer is fine. However, I ran into an "is not JSON serializable" error using it.

Here's how I fixed it:

with open("file-name.json", 'w') as output:
    output.write(str(response))

Although it is not a good fix, since the file it creates will not have double quotes and so is not valid JSON, it is great if you are looking for something quick and dirty.
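
A sketch of an alternative that keeps the output as valid JSON: if the error comes from values the json module cannot serialize (dates, custom objects, and so on), the default parameter lets you supply a fallback converter. Here str is used as that fallback, and the response value is a hypothetical example:

import json
from datetime import datetime

response = {'when': datetime(2020, 1, 1), 'status': 'ok'}  # hypothetical non-serializable value

with open('file-name.json', 'w') as output:
    json.dump(response, output, default=str)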

Akshat Bajaj
0

The JSON data can be written to a file as follows

hist1 = [{'val_loss': [0.5139984398465246],
          'val_acc': [0.8002029867684085],
          'loss': [0.593220705309384],
          'acc': [0.7687131817929321]},
         {'val_loss': [0.46456472964199463],
          'val_acc': [0.8173602046780344],
          'loss': [0.4932038113037539],
          'acc': [0.8063946213802453]}]

Write to a file:

import json

with open('text1.json', 'w') as f:
    json.dump(hist1, f)
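
To load it back for inspection, a minimal sketch:

import json

with open('text1.json') as f:
    hist1_loaded = json.load(f)
print(hist1_loaded[0]['val_acc'])  # [0.8002029867684085]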
Ashok Kumar Jayaraman