
inflate_add

(PHP 7, PHP 8)

inflate_add — Incrementally inflate encoded data

Description

inflate_add(InflateContext $context, string $data, int $flush_mode = ZLIB_SYNC_FLUSH): string|false

Incrementally inflates encoded data in the specified context.

Limitation: header information from GZIP compressed data is not made available.
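
If the GZIP header fields are needed, they can be read from the raw bytes before inflating. A minimal sketch, assuming the input begins with a standard RFC 1952 header; the file name is illustrative:

<?php
$gz = file_get_contents('example.gz'); // illustrative input file

// RFC 1952 layout: ID1 ID2 CM FLG MTIME(4 bytes) XFL OS ...
if (strncmp($gz, "\x1f\x8b", 2) !== 0) {
    die('Not a GZIP stream');
}
// MTIME is a 32-bit little-endian Unix timestamp at offset 4
['mtime' => $mtime] = unpack('Vmtime', substr($gz, 4, 4));
echo 'Original modification time: ', date('c', $mtime), "\n";

// inflate_add() still consumes the whole stream, header included
$ctx = inflate_init(ZLIB_ENCODING_GZIP);
echo inflate_add($ctx, $gz, ZLIB_FINISH);
?>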

Parameters

context

A context created with inflate_init().

data

A chunk of compressed data.

flush_mode

One of ZLIB_BLOCK, ZLIB_NO_FLUSH, ZLIB_PARTIAL_FLUSH, ZLIB_SYNC_FLUSH (default), ZLIB_FULL_FLUSH, ZLIB_FINISH. Normally you will want to set ZLIB_NO_FLUSH to maximize compression, and ZLIB_FINISH to terminate with the last chunk of data. See the » zlib manual for a detailed description of these constants.
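
A minimal sketch of this pattern — ZLIB_NO_FLUSH for intermediate chunks and ZLIB_FINISH for the last one; the compressed input is produced with the deflate counterparts only to keep the example self-contained:

<?php
// Produce some DEFLATE-compressed data to feed in
$defl = deflate_init(ZLIB_ENCODING_RAW);
$compressed = deflate_add($defl, str_repeat('hello world ', 1000), ZLIB_FINISH);

$ctx = inflate_init(ZLIB_ENCODING_RAW);
$out = '';
$chunks = str_split($compressed, 256); // feed the stream in small pieces
$last = count($chunks) - 1;
foreach ($chunks as $i => $chunk) {
    $out .= inflate_add($ctx, $chunk, $i === $last ? ZLIB_FINISH : ZLIB_NO_FLUSH);
}
var_dump(strlen($out)); // int(12000)
?>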

Return Values

Returns a chunk of uncompressed data, or false on failure.

Errors/Exceptions

If invalid parameters are given, if inflating the data requires a preset dictionary but none is specified, or if the compressed stream is corrupt or has an invalid checksum, an error of level E_WARNING is generated.
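
Because failures surface as an E_WARNING plus a false return rather than an exception, callers typically check the return value. A minimal sketch, with the warning suppressed so the false branch handles the error:

<?php
$ctx = inflate_init(ZLIB_ENCODING_RAW);

// Deliberately corrupt input: not a valid DEFLATE stream
$result = @inflate_add($ctx, "\xff\xff\xff\xff", ZLIB_FINISH);

if ($result === false) {
    echo 'inflate failed: ', error_get_last()['message'] ?? 'unknown error', "\n";
}
?>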

Changelog

Version	Description
8.0.0 context expects an InflateContext instance now; previously, a resource was expected.
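
For code migrating across that boundary, a small sketch of the type check (purely illustrative):

<?php
$ctx = inflate_init(ZLIB_ENCODING_RAW);

// PHP >= 8.0 returns an InflateContext object ...
var_dump($ctx instanceof InflateContext);

// ... so pre-8.0 code relying on is_resource($ctx) must be updated;
// during a migration both forms can be accepted:
if ($ctx instanceof InflateContext || is_resource($ctx)) {
    // safe to pass to inflate_add()
}
?>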

See Also

inflate_init() - Initialize an incremental inflate context


User Contributed Notes 1 note

burp at -only-in-German-fuerspam dot de
10 months ago
It's not obvious how to use this for _incremental_ decompression:
you feed _the compressed data_ into inflate_add() _in pieces_.
The internal state of the zlib context makes sure that you can split at any point and still get the correct total data out, as long as you keep reading until the end.

This way, you don't have to hold the complete uncompressed data in memory at any one time (and don't have to materialize it as a file for gzopen() etc. either), allowing you to parse files much bigger than the available PHP memory limit.

<?php
/* A good step size depends on the input's level of compression;
   unfortunately there's no obvious way to know that beforehand.
   If in doubt, choose a rather small value and glue the pieces
   together until there's enough data for processing. */
$step = 500000;

$dataGz = load_gzip_compressed_data_to_string();

$start = 0;
$outLen = 0;
$inflCtxt = inflate_init(ZLIB_ENCODING_GZIP);
$status = inflate_get_status($inflCtxt);

while ($status == ZLIB_OK) {
    $split = substr($dataGz, $start, $step);
    $dataFragment = inflate_add($inflCtxt, $split);
    /* process fragment, potentially keep parts across iterations */
    $outLen += strlen($dataFragment);
    $status = inflate_get_status($inflCtxt);
    $start += $step;
}
echo 'Input: ' . strlen($dataGz) . ' Bytes / Output: ' . $outLen . ' Bytes.';
?>

N.B.: Archives with an extremely high compression ratio will still bomb out with memory exhaustion, as it's not possible to define an output limit in inflate_init() similar to gzuncompress()'s max_length parameter.
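
One possible mitigation (a sketch, not part of the note): cap the running total yourself and stop feeding data once it is exceeded. Note this limits the accumulated output, not the expansion of a single call, so the compressed chunks should be kept small; the file name is illustrative:

<?php
$limit = 50 * 1024 * 1024; // self-imposed 50 MiB cap (illustrative)
$inflCtxt = inflate_init(ZLIB_ENCODING_GZIP);
$outLen = 0;

$fh = fopen('big.gz', 'rb'); // illustrative input file
while (!feof($fh) && inflate_get_status($inflCtxt) === ZLIB_OK) {
    $fragment = inflate_add($inflCtxt, fread($fh, 65536));
    $outLen += strlen($fragment);
    if ($outLen > $limit) {
        die('Refusing to inflate more than ' . $limit . ' bytes');
    }
    /* process $fragment here instead of accumulating it */
}
fclose($fh);
?>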