update manual
JoaoAparicio committed Nov 4, 2023
1 parent 0fa6fba commit bc9169e
Showing 1 changed file with 8 additions and 0 deletions.
docs/src/manual.md (8 additions, 0 deletions)

@@ -46,6 +46,14 @@ using Arrow
table = Arrow.Table("data.arrow")
```

Optionally, the `filtercolumns` keyword argument selects which columns to load:

```julia
table = Arrow.Table("data.arrow"; filtercolumns = ["col1", "col3"])
```

This can be a significant performance improvement when the data is compressed and some columns are not needed.
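A minimal sketch of that scenario (the table contents and column names below are illustrative; `compress=:zstd` is `Arrow.write`'s option for writing compressed buffers):

```julia
using Arrow

# Write a small table whose buffers are zstd-compressed on disk.
tbl = (col1 = collect(1:1_000), col2 = rand(1_000), col3 = rand(1_000))
Arrow.write("data.arrow", tbl; compress=:zstd)

# Load only col1 and col3; per the note above, the unused col2
# never needs to be decompressed.
table = Arrow.Table("data.arrow"; filtercolumns = ["col1", "col3"])
```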

### `Arrow.Table`

The type of `table` in this example will be an `Arrow.Table`. When "reading" the arrow data, `Arrow.Table` first ["mmapped"](https://en.wikipedia.org/wiki/Mmap) the `data.arrow` file, which is an important technique for dealing with data larger than the available RAM on a system. By "mmapping" a file, the OS doesn't actually load the entire file contents into RAM at once; instead, file contents are "swapped" into RAM as different regions of the file are requested. Once "mmapped", `Arrow.Table` then inspected the metadata in the file to determine the number of columns, their names and types, the byte offset at which each column begins in the file data, and even how many "batches" are included in this file (arrow tables may be partitioned into one or more "record batches", each containing a portion of the data). Armed with all the appropriate metadata, `Arrow.Table` then created custom array objects ([`Arrow.ArrowVector`](@ref)), which act as "views" into the raw arrow memory bytes. This is a significant point: no extra memory is allocated for "data" when reading arrow data. This is in contrast to reading data from a csv file as columns into Julia structures, where we would need to allocate those array structures ourselves, then parse the file, "filling in" each element of the array with the data parsed from it. Arrow data, on the other hand, is *already laid out in memory or on disk* in a binary format, and as long as we have the metadata to interpret the raw bytes, we can figure out whether to treat those bytes as a `Vector{Float64}`, etc. (a minimal sketch of this zero-copy access follows below). A sample of the kinds of arrow array types you might see when deserializing arrow data includes:
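A minimal sketch of that zero-copy access, assuming the `data.arrow` file from above (the column name `col1` is illustrative):

```julia
using Arrow

table = Arrow.Table("data.arrow")

# Columns come back as Arrow.ArrowVector subtypes: lightweight views
# over the mmapped file bytes, not freshly allocated Julia arrays.
col = table.col1   # access a column by name via property access
typeof(col)        # an ArrowVector subtype wrapping the raw arrow memory
col[1]             # elements are interpreted from the raw bytes on demand
```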
