C#'s JsonSerializer memory issues when serializing large string and byte array values

JsonSerializer is the "de-facto" JSON serializer (and deserializer) in modern C# and .NET. It is known for its performance compared to alternatives like Newtonsoft.Json. However, at the time of this writing, it has a known issue when serializing large values because it buffers the entire value in memory. See: github.com/dotnet/runtime/issues/67337

This may lead to increased memory usage, allocation of large objects, increased garbage collector (GC) activity, increased CPU usage and increased latency. The issue mostly affects large string and byte array values because they can be "arbitrarily large" (well, technically there's a maximum payload size that JsonSerializer allows you to write).

Mitigation

If you're facing this issue, the following may help:

Consider redesigning your API to avoid large string or byte array values in the payload. Consider providing a URL instead so that users or clients can use the URL to download the contents as a binary stream after having read the JSON payload.

Configure the JsonSerializer to use the UnsafeRelaxedJsonEscaping encoder via JsonSerializerOptions.Encoder. This is the default encoder when using ASP.NET Core, but it's not the default when using JsonSerializer directly. The encoder is responsible for escaping characters (e.g. replacing a newline with \n or 🌄 with \uD83C\uDF04). The default encoder is more aggressive and escapes more characters. JsonSerializer allocates (actually rents, but more on this later) extra memory when a string contains characters that need to be escaped. Using a more relaxed encoder can reduce the chances that string values will need escaping and therefore reduce allocations. However, it does not fix the overall problem; it only reduces the impact.
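
A minimal sketch of that configuration (the sample string is made up; the escaped form produced by the default encoder matches the escaping behavior described later in this post):

```csharp
using System;
using System.Text.Encodings.Web;
using System.Text.Json;

var options = new JsonSerializerOptions
{
    // Escapes far fewer characters than the default encoder, so large strings are
    // less likely to need an extra escape buffer. Only use it when the output is
    // not embedded in a context (e.g. raw HTML) that relies on the extra escaping.
    Encoder = JavaScriptEncoder.UnsafeRelaxedJsonEscaping
};

Console.WriteLine(JsonSerializer.Serialize("你好"));          // "\u4F60\u597D" (default encoder)
Console.WriteLine(JsonSerializer.Serialize("你好", options)); // "你好" (relaxed encoder)
```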

Engage in the discussion thread of the issue linked above. Maybe that can lead to it being bumped up in priority. Maybe not. I don't know.

As a last-ditch effort, you can create a custom JsonConverter that bypasses the low-level Utf8JsonWriter when it detects large strings or byte arrays, and instead writes the values directly to the underlying buffer in smaller chunks, flushing the buffer to the output stream periodically. This is a risky path: it is error-prone, can produce invalid JSON output, and requires a lot of manual work to get right. You'd have to disable Utf8JsonWriter validation, handle escaping, UTF-8 and Base64 encoding yourself, and ensure that separators (i.e. the , between JSON values) are written correctly. This is a nightmare for maintainability.

Additional details

JsonSerializer uses Utf8JsonWriter under the hood to write the JSON payload. All of its Write* methods, like WriteNumberValue, WriteStringValue, etc., write directly to a memory buffer synchronously. Periodically, in between writing values, JsonSerializer checks whether the buffer has reached a certain threshold and, if so, flushes its contents to the output stream (e.g. the output file, the network stream, etc.). This pattern makes JsonSerializer relatively efficient: it minimizes I/O operations (i.e. writing to the stream too frequently) and the overhead of async operations (Utf8JsonWriter itself is purely synchronous).
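
To make the pattern concrete, here is a rough sketch of the write-then-flush loop using the public Utf8JsonWriter API. The file name and the 16 KB threshold are made up for illustration; the real implementation inside JsonSerializer differs in the details:

```csharp
using System.IO;
using System.Text.Json;

await using var stream = File.Create("numbers.json");
await using var writer = new Utf8JsonWriter(stream);

writer.WriteStartArray();
for (int i = 0; i < 1_000_000; i++)
{
    writer.WriteNumberValue(i); // synchronous write into the in-memory buffer

    // Flush only once enough bytes have accumulated, not after every value.
    if (writer.BytesPending > 16 * 1024)
    {
        await writer.FlushAsync(); // copies the buffered bytes to the stream
    }
}
writer.WriteEndArray();
await writer.FlushAsync();
```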

However, since Utf8JsonWriter does not provide a means to write a partial value, when you have a large value (say a string with a million characters or a byte array with a million bytes), it will write all of it to the buffer at once. If the data is larger than the buffer's current size, it will "grow" the buffer by allocating a buffer large enough to fit the value it needs to write and then copying the data from the "old" buffer to the new, larger buffer.
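
You can observe this buffering behavior directly; a small illustration (the byte counts include the surrounding quotes):

```csharp
using System;
using System.IO;
using System.Text.Json;

using var stream = new MemoryStream();
using var writer = new Utf8JsonWriter(stream);

// A single large value is written to the in-memory buffer in one go;
// nothing reaches the stream until Flush is called.
writer.WriteStringValue(new string('a', 1_000_000));

Console.WriteLine(writer.BytesPending); // 1,000,002 (value + two quotes)
Console.WriteLine(stream.Length);       // 0
writer.Flush();
Console.WriteLine(stream.Length);       // 1,000,002
```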

Now, to be clear, it does not actually allocate the array buffers directly. It uses array pooling. .NET provides a built-in shared array pool that allows reusing and recycling arrays instead of creating a new one every time. When you request an array from the array pool, it checks whether there's a free one that's large enough for the size you requested. If it finds such an array, it hands it to you; otherwise, it allocates a new one. However, if you have concurrent requests writing really large values at the same time on different threads, the chances of finding free arrays in the pool decrease and, consequently, the chances of allocating new arrays increase.
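
This is the standard ArrayPool<T>.Shared rent/return pattern. A stand-alone illustration (the 2,000,000-byte size is chosen to mirror the buffers seen in the profiler screenshots below):

```csharp
using System;
using System.Buffers;

// Rent returns an array of at least the requested length, rounded up to the
// pool's bucket sizes (powers of two). If no suitable free array exists, a new
// one is allocated, which for multi-megabyte buffers lands on the large object heap.
byte[] buffer = ArrayPool<byte>.Shared.Rent(2_000_000);
try
{
    Console.WriteLine(buffer.Length); // >= 2,000,000 (e.g. 2,097,152)
    // ... fill and use the buffer ...
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer);
}
```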

When writing a byte array as a string value, it encodes the value as Base64 before writing it to the output. It rents a buffer large enough to hold the Base64-encoded data, which is roughly 33% larger than the original data. So, if you are writing a value of 100,000 bytes, it may rent an array of at least 133,336 bytes (the Base64-encoded length, including padding) to store the encoded output in before it flushes it to the stream (see source code).
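
A quick way to see the size math, using the System.Buffers.Text.Base64 helper (the 100,000-byte figure is just the example from above):

```csharp
using System;
using System.Buffers.Text;

// Base64 turns every 3 input bytes into 4 output bytes (rounded up to a multiple
// of 4 with '=' padding), so the encode buffer is about a third larger than the input.
int inputLength = 100_000;
int encodedLength = Base64.GetMaxEncodedToUtf8Length(inputLength);
Console.WriteLine(encodedLength); // 133336
```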

Incidentally, while looking at the code linked above, I noticed that it was unnecessarily allocating an extra buffer and performing a memory copy and raised the issue (see: github.com/dotnet/runtime/issues/97628) and made a pull request to fix it (see: github.com/dotnet/runtime/pull/97687).

The following image is taken from Visual Studio's object allocation profiler. It shows 60+ byte arrays of 2,097,152 bytes each allocated on the large object heap (more on this below). These were allocated after making 1,000 requests, 50 at a time concurrently. Each request returns a payload that contains a 1,000,000-byte array.

[Image: Visual Studio object allocation profiler showing 2,097,152-byte arrays on the large object heap]

When writing a string value, it encodes the string to UTF-8 before writing it to the output. C# strings are UTF-16 encoded: each char takes up 2 bytes. But JsonSerializer needs to output UTF-8 encoded bytes. In the best case, each character takes up only 1 byte in UTF-8 (e.g. when the text contains only ASCII characters); in the worst case, the encoded text will be 3x source.Length bytes (see: source code). It rents a buffer to store the encoded bytes before writing them to the output.
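
For a feel of the difference between the in-memory UTF-16 representation and the UTF-8 output (the string contents are chosen purely for illustration):

```csharp
using System;
using System.Text;

string ascii = "hello"; // 5 chars, 10 bytes as UTF-16 in memory
string cjk = "你好";     // 2 chars, 4 bytes as UTF-16 in memory

Console.WriteLine(Encoding.UTF8.GetByteCount(ascii)); // 5 (1 byte per char)
Console.WriteLine(Encoding.UTF8.GetByteCount(cjk));   // 6 (3 bytes per char)

// Worst-case bound, usable for sizing a buffer up front without scanning the string.
Console.WriteLine(Encoding.UTF8.GetMaxByteCount(ascii.Length));
```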

The following image shows byte arrays allocated in the large-object heap as a result of serializing payloads that contain strings with 1,000,000 characters.

[Image: Visual Studio object allocation profiler showing large byte arrays allocated while serializing 1,000,000-character strings]

If the string has characters that need to be escaped, it rents a buffer to perform the escaping into before doing the UTF-8 encoding. In the worst case, the escape buffer will be up to 6 times the size of the original string (before accounting for UTF-8 encoding) (see source code). This is because a character like '你' is escaped into '\u4F60', which is 6 characters long (note that when using the relaxed encoder, this particular character will not be escaped).
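
A small demonstration of that 6x expansion with the default encoder (the 100-character string is arbitrary):

```csharp
using System;
using System.Text.Json;

// With the default encoder, each '你' becomes the 6-character sequence \u4F60,
// so the escaped text is 6x the original length (plus the surrounding quotes).
string json = JsonSerializer.Serialize(new string('你', 100));
Console.WriteLine(json.Length); // 602 = 100 * 6 + 2 quotes
```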

The following two images show the arrays allocated for escaping and writing string values. The first image highlights the char arrays used for escaping and the second image highlights the byte arrays allocated to write the UTF-8 encoding of the escaped characters.

[Image: allocation profiler highlighting the char arrays rented for escaping string values]

[Image: allocation profiler highlighting the byte arrays rented for the UTF-8 encoding of the escaped strings]

For such large arrays, these allocations go on the large-object heap (LOH). This is a logical area of the heap where the GC stores objects larger than 85,000 bytes (the threshold is configurable). The LOH is not compacted by default during garbage collection, meaning that area of memory is more likely to become fragmented, which makes it harder to find contiguous space for new allocations even if there's technically enough free space. Objects on the LOH are garbage-collected as part of "Generation 2", which is a more expensive collection usually reserved for long-lived objects.
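
If LOH fragmentation becomes a problem, .NET does let you opt in to compacting it on the next full collection. This is shown only to illustrate the default-off behavior; it does nothing about the allocations themselves:

```csharp
using System;
using System.Runtime;

// The LOH is normally swept but not compacted. This opts in to a one-time
// compaction during the next blocking gen 2 collection.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
```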

You can find the code used for experiments here.