We now set bInheritHandles to true when creating them, which makes it
possible to pass them to subprocesses. Before, passing them would fail
silently in strange ways, often simply losing data. Also added flags to
disable OVERLAPPED_IO on Windows and O_NONBLOCK on POSIX.
The correct resizing behavior for arrays and buffers was implemented for
`janet_putindex` but not for `janet_put`. This change copies that
behavior to `janet_put`.
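
A sketch of the user-visible effect, assuming the core `put` builtin
routes through `janet_put`:

```janet
(def arr @[])
# Index 3 is past the current end; with this fix the array grows to
# length 4 and the gap is filled with nil, as janet_putindex already did.
(put arr 3 :x)
(pp arr) # -> @[nil nil nil :x]
```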
The `varfn` macro would previously choke on `(varfn abc {:abc 123} ...)`
due to mishandling of structs and tables as metadata. This caused issues
when running `janet -d` with spork.
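
After the fix, a form like the one above expands correctly; a minimal
sketch (the body here is arbitrary):

```janet
(varfn abc
  {:abc 123} # struct metadata no longer trips up the macro
  [x]
  (* 2 x))

(print (abc 21)) # -> 42
```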
Instead of manually putting bindings into environment tables, just use
`setdyn` to set the bindings based on the command-line flags. This
results in easier-to-understand behavior and prevents "swallowing" of
options. For example, linting and debug flags should be set in the
root-env by default so that they apply to all loaded modules.
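
A rough sketch of the idea; the `:debug` keyword is only an assumed name
for the dynamic binding behind `janet -d`:

```janet
# The launcher sets the flag as an ordinary dynamic binding...
(setdyn :debug true)
# ...so any loaded module can read it back with `dyn` instead of the
# option being "swallowed" by the launcher's own environment table.
(print (dyn :debug)) # -> true
```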
This does create a lot of warnings, especially in the test suite, but
should improve code and point out real issues. To silence individual
messages, either disable linting, add the :unused metadata to a binding,
or add the "_" prefix to the symbol.
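
For instance (the binding names are only illustrative):

```janet
(def _scratch (+ 1 2))        # leading "_": the linter skips this binding
(def scratch :unused (+ 1 2)) # :unused metadata does the same
```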
An optimization to elide the creation of intermediate tuples was
erroneously flagging a splice as invalid, even though it was valid. Now,
if we see a splice on the rhs, we bail out of the optimization.
Sometimes connecting a unix socket returns a 0 status, which indicates
the connection succeeded immediately; in that case, entering the event
loop to wait for the connection to complete actually breaks things.
It seems that on FreeBSD, with edge-triggered events, we were waiting for
the socket to signal that it is writable (to complete the connection),
but that never occurs because the event already took place before we
registered for it. Going by the return status alone to decide whether to
enter the event loop and await connection completion seems sensible here.