/* C++ modules. Experimental!
Copyright (C) 2017-2021 Free Software Foundation, Inc.
Written by Nathan Sidwell <nathan@acm.org> while at Facebook
This file is part of GCC.
GCC is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3, or (at your option)
any later version.
GCC is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>. */
/* Comments in this file have a non-negligible chance of being wrong
or at least inaccurate. Due to (a) my misunderstanding, (b)
ambiguities that I have interpreted differently to the original
intent, (c) changes in the specification, (d) my poor wording,
(e) source changes. */
/* (Incomplete) Design Notes
A hash table contains all module names. Imported modules are
present in a modules array, which by construction places an
import's dependencies before the import itself. The single
exception is the current TU, which always occupies slot zero (even
when it is not a module).
Imported decls occupy an entity_ary, an array of binding_slots, indexed
by importing module and index within that module. A flat index is
used, as each module reserves a contiguous range of indices.
Initially each slot indicates the CMI section containing the
streamed decl. When the decl is imported it will point to the decl
itself.
Additionally each imported decl is mapped in the entity_map via its
DECL_UID to the flat index in the entity_ary. Thus we can locate
the index for any imported decl by using this map and then
de-flattening the index via a binary search of the module vector.
Cross-module references are by (remapped) module number and
module-local index.
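For example, given an imported decl, a sketch of locating its
coordinates (illustrative names, not the actual code):

unsigned ix = entity_map->get (DECL_UID (decl)); // flat index
// binary search the modules array for the greatest base <= ix
module_state *m = owning_module (ix);
unsigned local = ix - m->base; // module-local index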
Each importable DECL contains several flags. The simple set are
DECL_EXPORT_P, DECL_MODULE_PURVIEW_P and DECL_MODULE_IMPORT_P. The
first indicates whether it is exported, the second whether it is in
the module purview (as opposed to the global module fragment), and
the third indicates whether it was an import into this TU or not.
The more detailed flags are DECL_MODULE_PARTITION_P,
DECL_MODULE_ENTITY_P. The first is set in a primary interface unit
on decls that were read from module partitions (these will have
DECL_MODULE_IMPORT_P set too). Such decls will be streamed out to
the primary's CMI. DECL_MODULE_ENTITY_P is set when an entity is
imported, even if it matched a non-imported entity. Such a decl
will not have DECL_MODULE_IMPORT_P set, even though it has an entry
in the entity map and array.
Header units are module-like.
For namespace-scope lookup, the decls for a particular module are
held in a sparse array hanging off the binding of the name.
This is partitioned into two: a few fixed slots at the start
followed by the sparse slots afterwards. By construction we only
need to append new slots to the end -- there is never a need to
insert in the middle. The fixed slots are MODULE_SLOT_CURRENT for
the current TU (regardless of whether it is a module or not),
MODULE_SLOT_GLOBAL and MODULE_SLOT_PARTITION. These latter two
slots are used for merging entities across the global module and
module partitions respectively. MODULE_SLOT_PARTITION is only
present in a module. Neither of those two slots is searched during
name lookup -- they are internal use only. This vector is created
lazily once we require it; if there is only a declaration from the
current TU, a regular binding is present. It is converted on
demand.
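For example, a binding vector for some name might be laid out as
follows (a sketch; only the ordering is significant):

[MODULE_SLOT_CURRENT] decls from this TU
[MODULE_SLOT_GLOBAL] global-module merging (internal only)
[MODULE_SLOT_PARTITION] partition merging (internal, modules only)
[module A] module A's decls of this name
[module B] module B's decls of this name (A < B, appended)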
OPTIMIZATION: Outside of the current TU, we only need ADL to work.
We could optimize regular lookup for the current TU by glomming all
the visible decls on its slot. Perhaps wait until design is a
little more settled though.
There is only one instance of each extern-linkage namespace. It
appears in every module slot that makes it visible. It also
appears in MODULE_SLOT_GLOBAL. (It is an ODR violation if they
collide with some other global module entity.) We also have an
optimization that shares the slot for adjacent modules that declare
the same such namespace.
A module interface compilation produces a Compiled Module Interface
(CMI). The format used is Encapsulated Lazy Records Of Numbered
Declarations, which is essentially ELF's section encapsulation. (As
all good nerds are aware, Elrond is half Elf.) Some sections are
named, and contain information about the module as a whole (indices
etc), and other sections are referenced by number. Although I
don't defend against actively hostile CMIs, there is some
checksumming involved to verify data integrity. When dumping out
an interface, we generate a graph of all the
independently-redeclarable DECLS that are needed, and the decls
they reference. From that we determine the strongly connected
components (SCC) within this TU. Each SCC is dumped to a separate
numbered section of the CMI. We generate a binding table section,
mapping each namespace&name to a defining section. This allows
lazy loading.
Lazy loading employs mmap to map a read-only image of the CMI.
It thus only occupies address space and is paged in on demand,
backed by the CMI file itself. If mmap is unavailable, regular
FILEIO is used. Also, there's a bespoke ELF reader/writer here,
which implements just the section table and sections (including
string sections) of a 32-bit ELF in host byte-order. You can of
course inspect it with readelf. I figured 32-bit is sufficient
for a single module. I detect running out of section numbers, but
do not implement the ELF overflow mechanism. At least you'll get
an error if that happens.
We do not separate declarations and definitions. My guess is that
if you refer to the declaration, you'll also need the definition
(template body, inline function, class definition etc). But this
does mean we can get larger SCCs than if we separated them. It is
unclear whether this is a win or not.
Notice that we embed section indices into the contents of other
sections. Thus random manipulation of the CMI file by ELF tools
may well break it. The kosher way would probably be to introduce
indirection via section symbols, but that would require defining a
relocation type.
Notice that lazy loading of one module's decls can cause lazy
loading of other decls in the same or another module. Clearly we
want to avoid loops. In a correct program there can be no loops in
the module dependency graph, and the above-mentioned SCC algorithm
places all intra-module circular dependencies in the same SCC. It
also orders the SCCs wrt each other, so dependent SCCs come first.
As we load dependent modules first, we know there can be no
reference to a higher-numbered module, and because we write out
dependent SCCs first, likewise for SCCs within the module. This
allows us to immediately detect broken references. When loading,
we must ensure the rest of the compiler doesn't cause some
unconnected load to occur (for instance, instantiate a template).
Classes used:
dumper - logger
data - buffer
bytes - data streamer
bytes_in : bytes - scalar reader
bytes_out : bytes - scalar writer
elf - ELROND format
elf_in : elf - ELROND reader
elf_out : elf - ELROND writer
trees_in : bytes_in - tree reader
trees_out : bytes_out - tree writer
depset - dependency set
depset::hash - hash table of depsets
depset::tarjan - SCC determinator
uidset<T> - set T's related to a UID
uidset<T>::hash hash table of uidset<T>
loc_spans - location map data
module_state - module object
slurping - data needed during loading
macro_import - imported macro data
macro_export - exported macro data
The ELROND objects use mmap, for both reading and writing. If mmap
is unavailable, fileno IO is used to read and write blocks of data.
The mapper object uses fileno IO to communicate with the server or
program. */
/* In experimental (trunk) sources, MODULE_VERSION is a #define passed
in from the Makefile. It records the modification date of the
source directory -- that's the only way to stay sane. In release
sources, we (plan to) use the compiler's major.minor versioning.
While the format might not change between minor versions, it
seems simplest to tie the two together. There's no concept of
inter-version compatibility. */
#define IS_EXPERIMENTAL(V) ((V) >= (1U << 20))
#define MODULE_MAJOR(V) ((V) / 10000)
#define MODULE_MINOR(V) ((V) % 10000)
#define EXPERIMENT(A,B) (IS_EXPERIMENTAL (MODULE_VERSION) ? (A) : (B))
#ifndef MODULE_VERSION
#include "bversion.h"
#define MODULE_VERSION (BUILDING_GCC_MAJOR * 10000U + BUILDING_GCC_MINOR)
#elif !IS_EXPERIMENTAL (MODULE_VERSION)
#error "This is not the version I was looking for."
#endif
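/* For example, a release build of GCC 11.2 has MODULE_VERSION
110002 (11 * 10000 + 2), so MODULE_MAJOR is 11, MODULE_MINOR is 2
and IS_EXPERIMENTAL is false, as 110002 < 1U << 20. */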
#define _DEFAULT_SOURCE 1 /* To get TZ field of struct tm, if available. */
#include "config.h"
#define INCLUDE_STRING
#define INCLUDE_VECTOR
#include "system.h"
#include "coretypes.h"
#include "cp-tree.h"
#include "timevar.h"
#include "stringpool.h"
#include "dumpfile.h"
#include "bitmap.h"
#include "cgraph.h"
#include "tree-iterator.h"
#include "cpplib.h"
#include "mkdeps.h"
#include "incpath.h"
#include "libiberty.h"
#include "stor-layout.h"
#include "version.h"
#include "tree-diagnostic.h"
#include "toplev.h"
#include "opts.h"
#include "attribs.h"
#include "intl.h"
#include "langhooks.h"
/* This TU doesn't need or want to see the networking. */
#define CODY_NETWORKING 0
#include "mapper-client.h"
#if 0 // 1 for testing no mmap
#define MAPPED_READING 0
#define MAPPED_WRITING 0
#else
#if HAVE_MMAP_FILE && _POSIX_MAPPED_FILES > 0
/* mmap, munmap. */
#define MAPPED_READING 1
#if HAVE_SYSCONF && defined (_SC_PAGE_SIZE)
/* msync, sysconf (_SC_PAGE_SIZE), ftruncate */
/* posix_fallocate used if available. */
#define MAPPED_WRITING 1
#else
#define MAPPED_WRITING 0
#endif
#else
#define MAPPED_READING 0
#define MAPPED_WRITING 0
#endif
#endif
/* Some open(2) flag differences, what a colourful world it is! */
#if defined (O_CLOEXEC)
// OK
#elif defined (_O_NOINHERIT)
/* Windows' _O_NOINHERIT matches O_CLOEXEC flag */
#define O_CLOEXEC _O_NOINHERIT
#else
#define O_CLOEXEC 0
#endif
#if defined (O_BINARY)
// Ok?
#elif defined (_O_BINARY)
/* Windows' open(2) call defaults to text! */
#define O_BINARY _O_BINARY
#else
#define O_BINARY 0
#endif
static inline cpp_hashnode *cpp_node (tree id)
{
return CPP_HASHNODE (GCC_IDENT_TO_HT_IDENT (id));
}
static inline tree identifier (const cpp_hashnode *node)
{
/* HT_NODE() expands to node->ident that HT_IDENT_TO_GCC_IDENT()
then subtracts a nonzero constant, deriving a pointer to
a different member than ident. That's strictly undefined
and detected by -Warray-bounds. Suppress it. See PR 101372. */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Warray-bounds"
return HT_IDENT_TO_GCC_IDENT (HT_NODE (const_cast<cpp_hashnode *> (node)));
#pragma GCC diagnostic pop
}
/* Id for dumping module information. */
int module_dump_id;
/* We have a special module owner. */
#define MODULE_UNKNOWN (~0U) /* Not yet known. */
/* Prefix for section names. */
#define MOD_SNAME_PFX ".gnu.c++"
/* Format a version for user consumption. */
typedef char verstr_t[32];
static void
version2string (unsigned version, verstr_t &out)
{
unsigned major = MODULE_MAJOR (version);
unsigned minor = MODULE_MINOR (version);
if (IS_EXPERIMENTAL (version))
sprintf (out, "%04u/%02u/%02u-%02u:%02u%s",
2000 + major / 10000, (major / 100) % 100, (major % 100),
minor / 100, minor % 100,
EXPERIMENT ("", " (experimental)"));
else
sprintf (out, "%u.%u", major, minor);
}
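/* Worked examples of the above: version2string maps the release
version 110002 to "11.2", and an experimental version such as
2105071922 to "2021/05/07-19:22", the encoded modification date. */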
/* Include files to note translation for. */
static vec<const char *, va_heap, vl_embed> *note_includes;
/* Modules to note CMI pathnames. */
static vec<const char *, va_heap, vl_embed> *note_cmis;
/* Traits to hash an arbitrary pointer. Entries are not deletable,
and removal is a noop (removal needed upon destruction). */
template <typename T>
struct nodel_ptr_hash : pointer_hash<T>, typed_noop_remove <T *> {
/* Nothing is deletable. Everything is insertable. */
static bool is_deleted (T *) { return false; }
static void mark_deleted (T *) { gcc_unreachable (); }
};
/* Map from pointer to signed integer. */
typedef simple_hashmap_traits<nodel_ptr_hash<void>, int> ptr_int_traits;
typedef hash_map<void *,signed,ptr_int_traits> ptr_int_hash_map;
/********************************************************************/
/* Basic streaming & ELF. Serialization is usually via mmap. For
writing we slide a buffer over the output file, syncing it
appropriately. For reading we simply map the whole file (as a
file-backed read-only map -- it's just address space, leaving the
OS pager to deal with getting the data to us). Some buffers need
to be more conventional malloc'd contents. */
/* Variable length buffer. */
class data {
public:
class allocator {
public:
/* Tools tend to moan if the dtor's not virtual. */
virtual ~allocator () {}
public:
void grow (data &obj, unsigned needed, bool exact);
void shrink (data &obj);
public:
virtual char *grow (char *ptr, unsigned needed);
virtual void shrink (char *ptr);
};
public:
char *buffer; /* Buffer being transferred. */
/* Although size_t would be the usual size, we know we never get
more than 4GB of buffer -- because that's the limit of the
encapsulation format. And if you need bigger imports, you're
doing it wrong. */
unsigned size; /* Allocated size of buffer. */
unsigned pos; /* Position in buffer. */
public:
data ()
:buffer (NULL), size (0), pos (0)
{
}
~data ()
{
/* Make sure the derived and/or using class know what they're
doing. */
gcc_checking_assert (!buffer);
}
protected:
char *use (unsigned count)
{
if (size < pos + count)
return NULL;
char *res = &buffer[pos];
pos += count;
return res;
}
public:
void unuse (unsigned count)
{
pos -= count;
}
public:
static allocator simple_memory;
};
/* The simple data allocator. */
data::allocator data::simple_memory;
/* Grow buffer to at least size NEEDED. */
void
data::allocator::grow (data &obj, unsigned needed, bool exact)
{
gcc_checking_assert (needed ? needed > obj.size : !obj.size);
if (!needed)
/* Pick a default size. */
needed = EXPERIMENT (100, 1000);
if (!exact)
needed *= 2;
obj.buffer = grow (obj.buffer, needed);
if (obj.buffer)
obj.size = needed;
else
obj.pos = obj.size = 0;
}
/* Free a buffer. */
void
data::allocator::shrink (data &obj)
{
shrink (obj.buffer);
obj.buffer = NULL;
obj.size = 0;
}
char *
data::allocator::grow (char *ptr, unsigned needed)
{
return XRESIZEVAR (char, ptr, needed);
}
void
data::allocator::shrink (char *ptr)
{
XDELETEVEC (ptr);
}
/* Byte streamer base. Buffer with read/write position and smarts
for single bits. */
class bytes : public data {
public:
typedef data parent;
protected:
uint32_t bit_val; /* Bit buffer. */
unsigned bit_pos; /* Next bit in bit buffer. */
public:
bytes ()
:parent (), bit_val (0), bit_pos (0)
{}
~bytes ()
{
}
protected:
unsigned calc_crc (unsigned) const;
protected:
/* Finish bit packet. Rewind the bytes not used. */
unsigned bit_flush ()
{
gcc_assert (bit_pos);
unsigned bytes = (bit_pos + 7) / 8;
unuse (4 - bytes);
bit_pos = 0;
bit_val = 0;
return bytes;
}
};
/* Calculate the crc32 of the buffer. Note the CRC is stored in the
first 4 bytes, so don't include them. */
unsigned
bytes::calc_crc (unsigned l) const
{
unsigned crc = 0;
for (size_t ix = 4; ix < l; ix++)
crc = crc32_byte (crc, buffer[ix]);
return crc;
}
class elf_in;
/* Byte stream reader. */
class bytes_in : public bytes {
typedef bytes parent;
protected:
bool overrun; /* Sticky read-too-much flag. */
public:
bytes_in ()
: parent (), overrun (false)
{
}
~bytes_in ()
{
}
public:
/* Begin reading a named section. */
bool begin (location_t loc, elf_in *src, const char *name);
/* Begin reading a numbered section with optional name. */
bool begin (location_t loc, elf_in *src, unsigned, const char * = NULL);
/* Complete reading a buffer. Propagate errors and return true on
success. */
bool end (elf_in *src);
/* Return true if there is unread data. */
bool more_p () const
{
return pos != size;
}
public:
/* Start reading at OFFSET. */
void random_access (unsigned offset)
{
if (offset > size)
set_overrun ();
pos = offset;
bit_pos = bit_val = 0;
}
public:
void align (unsigned boundary)
{
if (unsigned pad = pos & (boundary - 1))
read (boundary - pad);
}
public:
const char *read (unsigned count)
{
char *ptr = use (count);
if (!ptr)
set_overrun ();
return ptr;
}
public:
bool check_crc () const;
/* We store the CRC in the first 4 bytes, using host endianness. */
unsigned get_crc () const
{
return *(const unsigned *)&buffer[0];
}
public:
/* Manipulate the overrun flag. */
bool get_overrun () const
{
return overrun;
}
void set_overrun ()
{
overrun = true;
}
public:
unsigned u32 (); /* Read uncompressed integer. */
public:
bool b (); /* Read a bool. */
void bflush (); /* Completed a block of bools. */
private:
void bfill (); /* Get the next block of bools. */
public:
int c (); /* Read a char. */
int i (); /* Read a signed int. */
unsigned u (); /* Read an unsigned int. */
size_t z (); /* Read a size_t. */
HOST_WIDE_INT wi (); /* Read a HOST_WIDE_INT. */
unsigned HOST_WIDE_INT wu (); /* Read an unsigned HOST_WIDE_INT. */
const char *str (size_t * = NULL); /* Read a string. */
const void *buf (size_t); /* Read a fixed-length buffer. */
cpp_hashnode *cpp_node (); /* Read a cpp node. */
};
/* Verify the buffer's CRC is correct. */
bool
bytes_in::check_crc () const
{
if (size < 4)
return false;
unsigned c_crc = calc_crc (size);
if (c_crc != get_crc ())
return false;
return true;
}
class elf_out;
/* Byte stream writer. */
class bytes_out : public bytes {
typedef bytes parent;
public:
allocator *memory; /* Obtainer of memory. */
public:
bytes_out (allocator *memory)
: parent (), memory (memory)
{
}
~bytes_out ()
{
}
public:
bool streaming_p () const
{
return memory != NULL;
}
public:
void set_crc (unsigned *crc_ptr);
public:
/* Begin writing, maybe reserve space for CRC. */
void begin (bool need_crc = true);
/* Finish writing. Spill to section by number. */
unsigned end (elf_out *, unsigned, unsigned *crc_ptr = NULL);
public:
void align (unsigned boundary)
{
if (unsigned pad = pos & (boundary - 1))
write (boundary - pad);
}
public:
char *write (unsigned count, bool exact = false)
{
if (size < pos + count)
memory->grow (*this, pos + count, exact);
return use (count);
}
public:
void u32 (unsigned); /* Write uncompressed integer. */
public:
void b (bool); /* Write bool. */
void bflush (); /* Finish block of bools. */
public:
void c (unsigned char); /* Write unsigned char. */
void i (int); /* Write signed int. */
void u (unsigned); /* Write unsigned int. */
void z (size_t s); /* Write size_t. */
void wi (HOST_WIDE_INT); /* Write HOST_WIDE_INT. */
void wu (unsigned HOST_WIDE_INT); /* Write unsigned HOST_WIDE_INT. */
void str (const char *ptr)
{
str (ptr, strlen (ptr));
}
void cpp_node (const cpp_hashnode *node)
{
str ((const char *)NODE_NAME (node), NODE_LEN (node));
}
void str (const char *, size_t); /* Write string of known length. */
void buf (const void *, size_t); /* Write fixed length buffer. */
void *buf (size_t); /* Create a writable buffer */
public:
/* Format a NUL-terminated raw string. */
void printf (const char *, ...) ATTRIBUTE_PRINTF_2;
void print_time (const char *, const tm *, const char *);
public:
/* Dump instrumentation. */
static void instrument ();
protected:
/* Instrumentation. */
static unsigned spans[4];
static unsigned lengths[4];
static int is_set;
};
/* Instrumentation. */
unsigned bytes_out::spans[4];
unsigned bytes_out::lengths[4];
int bytes_out::is_set = -1;
/* If CRC_PTR non-null, set the CRC of the buffer. Mix the CRC into
that pointed to by CRC_PTR. */
void
bytes_out::set_crc (unsigned *crc_ptr)
{
if (crc_ptr)
{
gcc_checking_assert (pos >= 4);
unsigned crc = calc_crc (pos);
unsigned accum = *crc_ptr;
/* Only mix the existing *CRC_PTR if it is non-zero. */
accum = accum ? crc32_unsigned (accum, crc) : crc;
*crc_ptr = accum;
/* Buffer will be sufficiently aligned. */
*(unsigned *)buffer = crc;
}
}
/* Finish a set of bools. */
void
bytes_out::bflush ()
{
if (bit_pos)
{
u32 (bit_val);
lengths[2] += bit_flush ();
}
spans[2]++;
is_set = -1;
}
void
bytes_in::bflush ()
{
if (bit_pos)
bit_flush ();
}
/* When reading, we don't know how many bools we'll read in. So read
4 bytes-worth, and then rewind when flushing if we didn't need them
all. You can't have a block of bools closer than 4 bytes to the
end of the buffer. */
void
bytes_in::bfill ()
{
bit_val = u32 ();
}
/* Bools are packed into bytes. You cannot mix bools and non-bools.
You must call bflush before emitting another type. So batch your
bools.
It may be worth optimizing for most bools being zero. Some kind of
run-length encoding? */
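/* A worked example of the packing: writing the bools 1,0,1 and then
flushing sets bit_val to 0b101, emits it via u32 as the four
little-endian bytes 05 00 00 00, and bit_flush rewinds the three
unused bytes, leaving just 0x05 in the stream. Reading mirrors
this: bfill slurps four bytes and bflush rewinds the same three. */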
void
bytes_out::b (bool x)
{
if (is_set != x)
{
is_set = x;
spans[x]++;
}
lengths[x]++;
bit_val |= unsigned (x) << bit_pos++;
if (bit_pos == 32)
{
u32 (bit_val);
lengths[2] += bit_flush ();
}
}
bool
bytes_in::b ()
{
if (!bit_pos)
bfill ();
bool v = (bit_val >> bit_pos++) & 1;
if (bit_pos == 32)
bit_flush ();
return v;
}
/* Exactly 4 bytes. Used internally for bool packing and a few other
places. We can't simply use uint32_t because (a) alignment and
(b) we need little-endian for the bool streaming rewinding to make
sense. */
void
bytes_out::u32 (unsigned val)
{
if (char *ptr = write (4))
{
ptr[0] = val;
ptr[1] = val >> 8;
ptr[2] = val >> 16;
ptr[3] = val >> 24;
}
}
unsigned
bytes_in::u32 ()
{
unsigned val = 0;
if (const char *ptr = read (4))
{
val |= (unsigned char)ptr[0];
val |= (unsigned char)ptr[1] << 8;
val |= (unsigned char)ptr[2] << 16;
val |= (unsigned char)ptr[3] << 24;
}
return val;
}
/* Chars are unsigned and written as single bytes. */
void
bytes_out::c (unsigned char v)
{
if (char *ptr = write (1))
*ptr = v;
}
int
bytes_in::c ()
{
int v = 0;
if (const char *ptr = read (1))
v = (unsigned char)ptr[0];
return v;
}
/* Ints that fit in 7 bits are written as a single byte. Otherwise
the first byte holds a marker bit, a 3-bit count (one less than the
number of bytes that follow) and the top 4 bits of the value; the
remaining bytes follow in big-endian form. */
void
bytes_out::i (int v)
{
if (char *ptr = write (1))
{
if (v <= 0x3f && v >= -0x40)
*ptr = v & 0x7f;
else
{
unsigned bytes = 0;
int probe;
if (v >= 0)
for (probe = v >> 8; probe > 0x7; probe >>= 8)
bytes++;
else
for (probe = v >> 8; probe < -0x8; probe >>= 8)
bytes++;
*ptr = 0x80 | bytes << 4 | (probe & 0xf);
if ((ptr = write (++bytes)))
for (; bytes--; v >>= 8)
ptr[bytes] = v & 0xff;
}
}
}
int
bytes_in::i ()
{
int v = 0;
if (const char *ptr = read (1))
{
v = *ptr & 0xff;
if (v & 0x80)
{
unsigned bytes = (v >> 4) & 0x7;
v &= 0xf;
if (v & 0x8)
v |= -1 ^ 0x7;
/* unsigned necessary due to left shifts of -ve values. */
unsigned uv = unsigned (v);
if ((ptr = read (++bytes)))
while (bytes--)
uv = (uv << 8) | (*ptr++ & 0xff);
v = int (uv);
}
else if (v & 0x40)
v |= -1 ^ 0x3f;
}
return v;
}
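/* Worked examples of the integer encoding: 5 fits in 7 bits, so it
is the single byte 0x05; -3 is 0x7d, sign-extended on read via the
0x40 bit. 300 (0x12c) does not fit: the first byte 0x81 carries
the marker bit, a count of zero (meaning one following byte) and
the top bits 0x1, then the byte 0x2c follows. Likewise -300 is
encoded as 0x8e 0xd4. */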
void
bytes_out::u (unsigned v)
{
if (char *ptr = write (1))
{
if (v <= 0x7f)
*ptr = v;
else
{
unsigned bytes = 0;
unsigned probe;
for (probe = v >> 8; probe > 0xf; probe >>= 8)
bytes++;
*ptr = 0x80 | bytes << 4 | probe;
if ((ptr = write (++bytes)))
for (; bytes--; v >>= 8)
ptr[bytes] = v & 0xff;
}
}
}
unsigned
bytes_in::u ()
{
unsigned v = 0;
if (const char *ptr = read (1))
{
v = *ptr & 0xff;
if (v & 0x80)
{
unsigned bytes = (v >> 4) & 0x7;
v &= 0xf;
if ((ptr = read (++bytes)))
while (bytes--)
v = (v << 8) | (*ptr++ & 0xff);
}
}
return v;
}
void
bytes_out::wi (HOST_WIDE_INT v)
{
if (char *ptr = write (1))
{
if (v <= 0x3f && v >= -0x40)
*ptr = v & 0x7f;
else
{
unsigned bytes = 0;
HOST_WIDE_INT probe;
if (v >= 0)
for (probe = v >> 8; probe > 0x7; probe >>= 8)
bytes++;
else
for (probe = v >> 8; probe < -0x8; probe >>= 8)
bytes++;
*ptr = 0x80 | bytes << 4 | (probe & 0xf);
if ((ptr = write (++bytes)))
for (; bytes--; v >>= 8)
ptr[bytes] = v & 0xff;
}
}
}
HOST_WIDE_INT
bytes_in::wi ()
{
HOST_WIDE_INT v = 0;
if (const char *ptr = read (1))
{
v = *ptr & 0xff;
if (v & 0x80)
{
unsigned bytes = (v >> 4) & 0x7;
v &= 0xf;
if (v & 0x8)
v |= -1 ^ 0x7;
/* unsigned necessary due to left shifts of -ve values. */
unsigned HOST_WIDE_INT uv = (unsigned HOST_WIDE_INT) v;
if ((ptr = read (++bytes)))
while (bytes--)
uv = (uv << 8) | (*ptr++ & 0xff);
v = (HOST_WIDE_INT) uv;
}
else if (v & 0x40)
v |= -1 ^ 0x3f;
}
return v;
}
/* Unsigned wide ints are just written as signed wide ints. */
inline void
bytes_out::wu (unsigned HOST_WIDE_INT v)
{
wi ((HOST_WIDE_INT) v);
}
inline unsigned HOST_WIDE_INT
bytes_in::wu ()
{
return (unsigned HOST_WIDE_INT) wi ();
}
/* size_t written as unsigned or unsigned wide int. */
inline void
bytes_out::z (size_t s)
{
if (sizeof (s) == sizeof (unsigned))
u (s);
else
wu (s);
}
inline size_t
bytes_in::z ()
{
if (sizeof (size_t) == sizeof (unsigned))
return u ();
else
return wu ();
}
/* Buffer simply memcpied. */
void *
bytes_out::buf (size_t len)
{
align (sizeof (void *) * 2);
return write (len);
}
void
bytes_out::buf (const void *src, size_t len)
{
if (void *ptr = buf (len))
memcpy (ptr, src, len);
}
const void *
bytes_in::buf (size_t len)
{
align (sizeof (void *) * 2);
const char *ptr = read (len);
return ptr;
}
/* Strings are written as a size_t length, followed by the buffer,
which includes a NUL terminator. Make sure that NUL is present on
read. */
void
bytes_out::str (const char *string, size_t len)
{
z (len);
if (len)
{
gcc_checking_assert (!string[len]);
buf (string, len + 1);
}
}
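/* For example, str ("hi", 2) writes the length as the single byte
0x02, then buf aligns to a 2 * sizeof (void *) boundary and copies
the three bytes 'h', 'i', NUL. */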
const char *
bytes_in::str (size_t *len_p)
{
size_t len = z ();
/* We're about to trust some user data. */
if (overrun)
len = 0;
if (len_p)
*len_p = len;
const char *str = NULL;
if (len)
{
str = reinterpret_cast<const char *> (buf (len + 1));
if (!str || str[len])
{
set_overrun ();
str = NULL;
}
}
return str ? str : "";
}
cpp_hashnode *
bytes_in::cpp_node ()
{
size_t len;
const char *s = str (&len);
if (!len)
return NULL;
return ::cpp_node (get_identifier_with_length (s, len));
}
/* Format a string directly to the buffer, including a terminating
NUL. Intended for human consumption. */
void
bytes_out::printf (const char *format, ...)
{
va_list args;
/* Exercise buffer expansion. */
size_t len = EXPERIMENT (10, 500);
while (char *ptr = write (len))
{
va_start (args, format);
size_t actual = vsnprintf (ptr, len, format, args) + 1;
va_end (args);
if (actual <= len)
{
unuse (len - actual);
break;
}
unuse (len);
len = actual;
}
}
void
bytes_out::print_time (const char *kind, const tm *time, const char *tz)
{
printf ("%stime: %4u/%02u/%02u %02u:%02u:%02u %s",
kind, time->tm_year + 1900, time->tm_mon + 1, time->tm_mday,
time->tm_hour, time->tm_min, time->tm_sec, tz);
}
/* Encapsulated Lazy Records Of Named Declarations.
Header: Stunningly Elf32_Ehdr-like
Sections: Sectional data
[1-N) : User data sections
N .strtab : strings, stunningly ELF STRTAB-like
Index: Section table, stunningly ELF32_Shdr-like. */
class elf {
protected:
/* Constants used within the format. */
enum private_constants {
/* File kind. */
ET_NONE = 0,
EM_NONE = 0,
OSABI_NONE = 0,
/* File format. */
EV_CURRENT = 1,
CLASS32 = 1,
DATA2LSB = 1,
DATA2MSB = 2,
/* Section numbering. */
SHN_UNDEF = 0,
SHN_LORESERVE = 0xff00,
SHN_XINDEX = 0xffff,
/* Section types. */
SHT_NONE = 0, /* No contents. */
SHT_PROGBITS = 1, /* Random bytes. */
SHT_STRTAB = 3, /* A string table. */
/* Section flags. */
SHF_NONE = 0x00, /* Nothing. */
SHF_STRINGS = 0x20, /* NUL-Terminated strings. */
/* I really hope we do not get CMI files larger than 4GB. */
MY_CLASS = CLASS32,
/* It is host endianness that is relevant. */
MY_ENDIAN = DATA2LSB
#ifdef WORDS_BIGENDIAN
^ DATA2LSB ^ DATA2MSB
#endif
};
public:
/* Constants visible to users. */
enum public_constants {
/* Special error codes. Breaking layering a bit. */
E_BAD_DATA = -1, /* Random unexpected data errors. */
E_BAD_LAZY = -2, /* Badly ordered laziness. */
E_BAD_IMPORT = -3 /* A nested import failed. */
};
protected:
/* File identification. On-disk representation. */
struct ident {
uint8_t magic[4]; /* 0x7f, 'E', 'L', 'F' */
uint8_t klass; /* 4:CLASS32 */
uint8_t data; /* 5:DATA2[LM]SB */
uint8_t version; /* 6:EV_CURRENT */
uint8_t osabi; /* 7:OSABI_NONE */
uint8_t abiver; /* 8: 0 */
uint8_t pad[7]; /* 9-15 */
};
/* File header. On-disk representation. */
struct header {
struct ident ident;
uint16_t type; /* ET_NONE */
uint16_t machine; /* EM_NONE */
uint32_t version; /* EV_CURRENT */
uint32_t entry; /* 0 */
uint32_t phoff; /* 0 */
uint32_t shoff; /* Section Header Offset in file */
uint32_t flags;
uint16_t ehsize; /* ELROND Header SIZE -- sizeof (header) */
uint16_t phentsize; /* 0 */
uint16_t phnum; /* 0 */
uint16_t shentsize; /* Section Header SIZE -- sizeof (section) */
uint16_t shnum; /* Section Header NUM */
uint16_t shstrndx; /* Section Header STRing iNDeX */
};
/* File section. On-disk representation. */
struct section {
uint32_t name; /* String table offset. */
uint32_t type; /* SHT_* */
uint32_t flags; /* SHF_* */
uint32_t addr; /* 0 */
uint32_t offset; /* OFFSET in file */
uint32_t size; /* SIZE of section */
uint32_t link; /* 0 */
uint32_t info; /* 0 */
uint32_t addralign; /* 0 */
uint32_t entsize; /* ENTry SIZE, usually 0 */
};
protected:
data hdr; /* The header. */
data sectab; /* The section table. */
data strtab; /* String table. */
int fd; /* File descriptor we're reading or writing. */
int err; /* Sticky error code. */
public:
/* Construct from file descriptor FD. E is errno if FD is invalid. */
elf (int fd, int e)
:hdr (), sectab (), strtab (), fd (fd), err (fd >= 0 ? 0 : e)
{}
~elf ()
{
gcc_checking_assert (fd < 0 && !hdr.buffer
&& !sectab.buffer && !strtab.buffer);
}
public:
/* Return the error, if we have an error. */
int get_error () const
{
return err;
}
/* Set the error, unless it's already been set. */
void set_error (int e = E_BAD_DATA)
{
if (!err)
err = e;
}
/* Get an error string. */
const char *get_error (const char *) const;
public:
/* Begin reading/writing file. Return false on error. */
bool begin () const
{
return !get_error ();
}
/* Finish reading/writing file. Return false on error. */
bool end ();
};
/* Return error string. */
const char *
elf::get_error (const char *name) const
{
if (!name)
return "Unknown CMI mapping";
switch (err)
{
case 0:
gcc_unreachable ();
case E_BAD_DATA:
return "Bad file data";
case E_BAD_IMPORT:
return "Bad import dependency";
case E_BAD_LAZY:
return "Bad lazy ordering";
default:
return xstrerror (err);
}
}
/* Finish file. Return false on error. */
bool
elf::end ()
{
/* Close the stream and free the section table. */
if (fd >= 0 && close (fd))
set_error (errno);
fd = -1;
return !get_error ();
}
/* ELROND reader. */
class elf_in : public elf {
typedef elf parent;
private:
/* For freezing & defrosting. */
#if !defined (HOST_LACKS_INODE_NUMBERS)
dev_t device;
ino_t inode;
#endif
public:
elf_in (int fd, int e)
:parent (fd, e)
{
}
~elf_in ()
{
}
public:
bool is_frozen () const
{
return fd < 0 && hdr.pos;
}
bool is_freezable () const
{
return fd >= 0 && hdr.pos;
}
void freeze ();
bool defrost (const char *);
/* If BYTES is in the mmapped area, allocate a new buffer for it. */
void preserve (bytes_in &bytes ATTRIBUTE_UNUSED)
{
#if MAPPED_READING
if (hdr.buffer && bytes.buffer >= hdr.buffer
&& bytes.buffer < hdr.buffer + hdr.pos)
{
char *buf = bytes.buffer;
bytes.buffer = data::simple_memory.grow (NULL, bytes.size);
memcpy (bytes.buffer, buf, bytes.size);
}
#endif
}
/* If BYTES is not in SELF's mmapped area, free it. SELF might be
NULL. */
static void release (elf_in *self ATTRIBUTE_UNUSED, bytes_in &bytes)
{
#if MAPPED_READING
if (!(self && self->hdr.buffer && bytes.buffer >= self->hdr.buffer
&& bytes.buffer < self->hdr.buffer + self->hdr.pos))
#endif
data::simple_memory.shrink (bytes.buffer);
bytes.buffer = NULL;
bytes.size = 0;
}
public:
static void grow (data &data, unsigned needed)
{
gcc_checking_assert (!data.buffer);
#if !MAPPED_READING
data.buffer = XNEWVEC (char, needed);
#endif
data.size = needed;
}
static void shrink (data &data)
{
#if !MAPPED_READING
XDELETEVEC (data.buffer);
#endif
data.buffer = NULL;
data.size = 0;
}
public:
const section *get_section (unsigned s) const
{
if (s * sizeof (section) < sectab.size)
return reinterpret_cast<const section *>
(&sectab.buffer[s * sizeof (section)]);
else
return NULL;
}
unsigned get_section_limit () const
{
return sectab.size / sizeof (section);
}
protected:
const char *read (data *, unsigned, unsigned);
public:
/* Read section by number. */
bool read (data *d, const section *s)
{
return s && read (d, s->offset, s->size);
}
/* Find section by name. */
unsigned find (const char *name);
/* Find section by index. */
const section *find (unsigned snum, unsigned type = SHT_PROGBITS);
public:
/* Release the string table, when we're done with it. */
void release ()
{
shrink (strtab);
}
public:
bool begin (location_t);
bool end ()
{
release ();
#if MAPPED_READING
if (hdr.buffer)
munmap (hdr.buffer, hdr.pos);
hdr.buffer = NULL;
#endif
shrink (sectab);
return parent::end ();
}
public:
/* Return string name at OFFSET. Checks OFFSET range. Always
returns non-NULL. We know offset 0 is an empty string. */
const char *name (unsigned offset)
{
return &strtab.buffer[offset < strtab.size ? offset : 0];
}
};
/* ELROND writer. */
class elf_out : public elf, public data::allocator {
typedef elf parent;
/* Desired section alignment on disk. */
static const int SECTION_ALIGN = 16;
private:
ptr_int_hash_map identtab; /* Map of IDENTIFIERS to strtab offsets. */
unsigned pos; /* Write position in file. */
#if MAPPED_WRITING
unsigned offset; /* Offset of the mapping. */
unsigned extent; /* Length of mapping. */
unsigned page_size; /* System page size. */
#endif
public:
elf_out (int fd, int e)
:parent (fd, e), identtab (500), pos (0)
{
#if MAPPED_WRITING
offset = extent = 0;
page_size = sysconf (_SC_PAGE_SIZE);
if (page_size < SECTION_ALIGN)
/* Something really strange. */
set_error (EINVAL);
#endif
}
~elf_out ()
{
data::simple_memory.shrink (hdr);
data::simple_memory.shrink (sectab);
data::simple_memory.shrink (strtab);
}
#if MAPPED_WRITING
private:
void create_mapping (unsigned ext, bool extending = true);
void remove_mapping ();
#endif
protected:
using allocator::grow;
virtual char *grow (char *, unsigned needed);
#if MAPPED_WRITING
using allocator::shrink;
virtual void shrink (char *);
#endif
public:
unsigned get_section_limit () const
{
return sectab.pos / sizeof (section);
}
protected:
unsigned add (unsigned type, unsigned name = 0,
unsigned off = 0, unsigned size = 0, unsigned flags = SHF_NONE);
unsigned write (const data &);
#if MAPPED_WRITING
unsigned write (const bytes_out &);
#endif
public:
/* IDENTIFIER to strtab offset. */
unsigned name (tree ident);
/* String literal to strtab offset. */
unsigned name (const char *n);
/* Qualified name of DECL to strtab offset. */
unsigned qualified_name (tree decl, bool is_defn);
private:
unsigned strtab_write (const char *s, unsigned l);
void strtab_write (tree decl, int);
public:
/* Add a section with contents or strings. */
unsigned add (const bytes_out &, bool string_p, unsigned name);
public:
/* Begin and end writing. */
bool begin ();
bool end ();
};
/* Begin reading section NAME (of type PROGBITS) from SOURCE.
Data always checked for CRC. */
bool
bytes_in::begin (location_t loc, elf_in *source, const char *name)
{
unsigned snum = source->find (name);
return begin (loc, source, snum, name);
}
/* Begin reading section numbered SNUM with NAME (may be NULL). */
bool
bytes_in::begin (location_t loc, elf_in *source, unsigned snum, const char *name)
{
if (!source->read (this, source->find (snum))
|| !size || !check_crc ())
{
source->set_error (elf::E_BAD_DATA);
source->shrink (*this);
if (name)
error_at (loc, "section %qs is missing or corrupted", name);
else
error_at (loc, "section #%u is missing or corrupted", snum);
return false;
}
pos = 4;
return true;
}
/* Finish reading a section. */
bool
bytes_in::end (elf_in *src)
{
if (more_p ())
set_overrun ();
if (overrun)
src->set_error ();
src->shrink (*this);
return !overrun;
}
/* Begin writing buffer. */
void
bytes_out::begin (bool need_crc)
{
if (need_crc)
pos = 4;
memory->grow (*this, 0, false);
}
/* Finish writing buffer. Stream out to SINK as named section NAME.
Return section number or 0 on failure. If CRC_PTR is non-null,
CRC the data. Otherwise it is a string section. */
unsigned
bytes_out::end (elf_out *sink, unsigned name, unsigned *crc_ptr)
{
lengths[3] += pos;
spans[3]++;
set_crc (crc_ptr);
unsigned sec_num = sink->add (*this, !crc_ptr, name);
memory->shrink (*this);
return sec_num;
}
/* Close and open the file, without destroying it. */
void
elf_in::freeze ()
{
gcc_checking_assert (!is_frozen ());
#if MAPPED_READING
if (munmap (hdr.buffer, hdr.pos) < 0)
set_error (errno);
#endif
if (close (fd) < 0)
set_error (errno);
fd = -1;
}
bool
elf_in::defrost (const char *name)
{
gcc_checking_assert (is_frozen ());
struct stat stat;
fd = open (name, O_RDONLY | O_CLOEXEC | O_BINARY);
if (fd < 0 || fstat (fd, &stat) < 0)
set_error (errno);
else
{
bool ok = hdr.pos == unsigned (stat.st_size);
#ifndef HOST_LACKS_INODE_NUMBERS
if (device != stat.st_dev
|| inode != stat.st_ino)
ok = false;
#endif
if (!ok)
set_error (EMFILE);
#if MAPPED_READING
if (ok)
{
char *mapping = reinterpret_cast<char *>
(mmap (NULL, hdr.pos, PROT_READ, MAP_SHARED, fd, 0));
if (mapping == MAP_FAILED)
fail:
set_error (errno);
else
{
if (madvise (mapping, hdr.pos, MADV_RANDOM))
goto fail;
/* These buffers are never NULL in this case. */
strtab.buffer = mapping + strtab.pos;
sectab.buffer = mapping + sectab.pos;
hdr.buffer = mapping;
}
}
#endif
}
return !get_error ();
}
/* Read LENGTH bytes at position POS into DATA. Return a pointer to
the data, or NULL on error. */
const char *
elf_in::read (data *data, unsigned pos, unsigned length)
{
#if MAPPED_READING
if (pos + length > hdr.pos)
{
set_error (EINVAL);
return NULL;
}
#else
if (pos != ~0u && lseek (fd, pos, SEEK_SET) < 0)
{
set_error (errno);
return NULL;
}
#endif
grow (*data, length);
#if MAPPED_READING
data->buffer = hdr.buffer + pos;
#else
if (::read (fd, data->buffer, data->size) != ssize_t (length))
{
set_error (errno);
shrink (*data);
return NULL;
}
#endif
return data->buffer;
}
/* Read section SNUM of TYPE. Return section pointer or NULL on error. */
const elf::section *
elf_in::find (unsigned snum, unsigned type)
{
const section *sec = get_section (snum);
if (!snum || !sec || sec->type != type)
return NULL;
return sec;
}
/* Find a section by NAME. Return its section number, or zero on
failure. */
unsigned
elf_in::find (const char *sname)
{
for (unsigned pos = sectab.size; pos -= sizeof (section); )
{
const section *sec
= reinterpret_cast<const section *> (&sectab.buffer[pos]);
if (0 == strcmp (sname, name (sec->name)))
return pos / sizeof (section);
}
return 0;
}
/* Begin reading file. Verify header. Pull in section and string
tables. Return true on success. */
bool
elf_in::begin (location_t loc)
{
if (!parent::begin ())
return false;
struct stat stat;
unsigned size = 0;
if (!fstat (fd, &stat))
{
#if !defined (HOST_LACKS_INODE_NUMBERS)
device = stat.st_dev;
inode = stat.st_ino;
#endif
/* Never generate files > 4GB, check we've not been given one. */
if (stat.st_size == unsigned (stat.st_size))
size = unsigned (stat.st_size);
}
#if MAPPED_READING
/* MAP_SHARED so that the file is backing store. If someone else
concurrently writes it, they're wrong. */
void *mapping = mmap (NULL, size, PROT_READ, MAP_SHARED, fd, 0);
if (mapping == MAP_FAILED)
{
fail:
set_error (errno);
return false;
}
/* We'll be hopping over this randomly. Some systems declare the
first parm as char *, and others declare it as void *. */
if (madvise (reinterpret_cast <char *> (mapping), size, MADV_RANDOM))
goto fail;
hdr.buffer = (char *)mapping;
#else
read (&hdr, 0, sizeof (header));
#endif
hdr.pos = size; /* Record size of the file. */
const header *h = reinterpret_cast<const header *> (hdr.buffer);
if (!h)
return false;
if (h->ident.magic[0] != 0x7f
|| h->ident.magic[1] != 'E'
|| h->ident.magic[2] != 'L'
|| h->ident.magic[3] != 'F')
{
error_at (loc, "not Encapsulated Lazy Records of Named Declarations");
failed:
shrink (hdr);
return false;
}
/* We expect a particular format -- the ELF is not intended to be
distributable. */
if (h->ident.klass != MY_CLASS
|| h->ident.data != MY_ENDIAN
|| h->ident.version != EV_CURRENT
|| h->type != ET_NONE
|| h->machine != EM_NONE
|| h->ident.osabi != OSABI_NONE)
{
error_at (loc, "unexpected encapsulation format or type");
goto failed;
}
int e = -1;
if (!h->shoff || h->shentsize != sizeof (section))
{
malformed:
set_error (e);
error_at (loc, "encapsulation is malformed");
goto failed;
}
unsigned strndx = h->shstrndx;
unsigned shnum = h->shnum;
if (shnum == SHN_XINDEX)
{
if (!read (&sectab, h->shoff, sizeof (section)))
{
section_table_fail:
e = errno;
goto malformed;
}
shnum = get_section (0)->size;
/* Freeing does mean we'll re-read it in the case we're not
mapping, but this is going to be rare. */
shrink (sectab);
}
if (!shnum)
goto malformed;
if (!read (&sectab, h->shoff, shnum * sizeof (section)))
goto section_table_fail;
if (strndx == SHN_XINDEX)
strndx = get_section (0)->link;
if (!read (&strtab, find (strndx, SHT_STRTAB)))
goto malformed;
/* The string table should be at least one byte, with NUL chars
at either end. */
if (!(strtab.size && !strtab.buffer[0]
&& !strtab.buffer[strtab.size - 1]))
goto malformed;
#if MAPPED_READING
/* Record the offsets of the section and string tables. */
sectab.pos = h->shoff;
strtab.pos = shnum * sizeof (section);
#else
shrink (hdr);
#endif
return true;
}
/* Create a new mapping. */
#if MAPPED_WRITING
void
elf_out::create_mapping (unsigned ext, bool extending)
{
#ifndef HAVE_POSIX_FALLOCATE
#define posix_fallocate(fd,off,len) ftruncate (fd, off + len)
#endif
void *mapping = MAP_FAILED;
if (extending && ext < 1024 * 1024)
{
if (!posix_fallocate (fd, offset, ext * 2))
mapping = mmap (NULL, ext * 2, PROT_READ | PROT_WRITE,
MAP_SHARED, fd, offset);
if (mapping != MAP_FAILED)
ext *= 2;
}
if (mapping == MAP_FAILED)
{
if (!extending || !posix_fallocate (fd, offset, ext))
mapping = mmap (NULL, ext, PROT_READ | PROT_WRITE,
MAP_SHARED, fd, offset);
if (mapping == MAP_FAILED)
{
set_error (errno);
mapping = NULL;
ext = 0;
}
}
#undef posix_fallocate
hdr.buffer = (char *)mapping;
extent = ext;
}
#endif
/* Flush out the current mapping. */
#if MAPPED_WRITING
void
elf_out::remove_mapping ()
{
if (hdr.buffer)
{
/* MS_ASYNC does the right thing with the removed mapping, including
a subsequent overlapping remap. */
if (msync (hdr.buffer, extent, MS_ASYNC)
|| munmap (hdr.buffer, extent))
/* We're somewhat screwed at this point. */
set_error (errno);
}
hdr.buffer = NULL;
}
#endif
/* Grow a mapping of PTR to be NEEDED bytes long. This gets
interesting if the new size grows the EXTENT. */
char *
elf_out::grow (char *data, unsigned needed)
{
if (!data)
{
/* First allocation, check we're aligned. */
gcc_checking_assert (!(pos & (SECTION_ALIGN - 1)));
#if MAPPED_WRITING
data = hdr.buffer + (pos - offset);
#endif
}
#if MAPPED_WRITING
unsigned off = data - hdr.buffer;
if (off + needed > extent)
{
/* We need to grow the mapping. */
unsigned lwm = off & ~(page_size - 1);
unsigned hwm = (off + needed + page_size - 1) & ~(page_size - 1);
gcc_checking_assert (hwm > extent);
remove_mapping ();
offset += lwm;
create_mapping (extent < hwm - lwm ? hwm - lwm : extent);
data = hdr.buffer + (off - lwm);
}
#else
data = allocator::grow (data, needed);
#endif
return data;
}
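/* For example (a sketch, assuming a 4K page size): if the buffer
lives at offset 0x1234 of the current mapping and must grow beyond
EXTENT, LWM becomes 0x1000 and HWM the new end rounded up to a
page. The old mapping is flushed, OFFSET advances by LWM and the
buffer pointer is rebased to OFF - LWM within the new mapping. */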
#if MAPPED_WRITING
/* Shrinking is a NOP. */
void
elf_out::shrink (char *)
{
}
#endif
/* Write S of length L to the strtab buffer. L must include the ending
NUL, if that's what you want. */
unsigned
elf_out::strtab_write (const char *s, unsigned l)
{
if (strtab.pos + l > strtab.size)
data::simple_memory.grow (strtab, strtab.pos + l, false);
memcpy (strtab.buffer + strtab.pos, s, l);
unsigned res = strtab.pos;
strtab.pos += l;
return res;
}
/* Write the qualified name of DECL. INNER is >0 if this is a
definition, 0 if it is a mere declaration, and <0 if it is a
qualifier of an outer name. */
void
elf_out::strtab_write (tree decl, int inner)
{
tree ctx = CP_DECL_CONTEXT (decl);
if (TYPE_P (ctx))
ctx = TYPE_NAME (ctx);
if (ctx != global_namespace)
strtab_write (ctx, -1);
tree name = DECL_NAME (decl);
if (!name)
name = DECL_ASSEMBLER_NAME_RAW (decl);
strtab_write (IDENTIFIER_POINTER (name), IDENTIFIER_LENGTH (name));
if (inner)
strtab_write (&"::{}"[inner+1], 2);
}
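/* For instance, a definition of N::f is written to the strtab as
"N" "::" "f" "{}" -- the "::" appended by the recursive INNER < 0
call on the context, the "{}" marking a definition. qualified_name
below then NUL-terminates the result. */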
/* Map IDENTIFIER IDENT to strtab offset. Inserts into strtab if not
already there. */
unsigned
elf_out::name (tree ident)
{
unsigned res = 0;
if (ident)
{
bool existed;
int *slot = &identtab.get_or_insert (ident, &existed);
if (!existed)
*slot = strtab_write (IDENTIFIER_POINTER (ident),
IDENTIFIER_LENGTH (ident) + 1);
res = *slot;
}
return res;
}
/* Map LITERAL to strtab offset. Does not detect duplicates and
expects LITERAL to remain live until strtab is written out. */
unsigned
elf_out::name (const char *literal)
{
return strtab_write (literal, strlen (literal) + 1);
}
/* Map a DECL's qualified name to strtab offset. Does not detect
duplicates. */
unsigned
elf_out::qualified_name (tree decl, bool is_defn)
{
gcc_checking_assert (DECL_P (decl) && decl != global_namespace);
unsigned result = strtab.pos;
strtab_write (decl, is_defn);
strtab_write ("", 1);
return result;
}
/* Add section to file. Return section number. TYPE & NAME identify
the section. OFF and SIZE identify the file location of its
data. FLAGS contains additional info. */
unsigned
elf_out::add (unsigned type, unsigned name, unsigned off, unsigned size,
unsigned flags)
{
gcc_checking_assert (!(off & (SECTION_ALIGN - 1)));
if (sectab.pos + sizeof (section) > sectab.size)
data::simple_memory.grow (sectab, sectab.pos + sizeof (section), false);
section *sec = reinterpret_cast<section *> (sectab.buffer + sectab.pos);
memset (sec, 0, sizeof (section));
sec->type = type;
sec->flags = flags;
sec->name = name;
sec->offset = off;
sec->size = size;
if (flags & SHF_STRINGS)
sec->entsize = 1;
unsigned res = sectab.pos;
sectab.pos += sizeof (section);
return res / sizeof (section);
}
/* Pad to the next alignment boundary, then write BUFFER to disk.
Return the position of the start of the write, or zero on failure. */
unsigned
elf_out::write (const data &buffer)
{
#if MAPPED_WRITING
/* HDR is always mapped. */
if (&buffer != &hdr)
{
bytes_out out (this);
grow (out, buffer.pos, true);
if (out.buffer)
memcpy (out.buffer, buffer.buffer, buffer.pos);
shrink (out);
}
else
/* We should have been aligned during the first allocation. */
gcc_checking_assert (!(pos & (SECTION_ALIGN - 1)));
#else
if (::write (fd, buffer.buffer, buffer.pos) != ssize_t (buffer.pos))
{
set_error (errno);
return 0;
}
#endif
unsigned res = pos;
pos += buffer.pos;
if (unsigned padding = -pos & (SECTION_ALIGN - 1))
{
#if !MAPPED_WRITING
/* Align the section on disk, should help the necessary copies.
fseeking to extend is non-portable. */
static char zero[SECTION_ALIGN];
if (::write (fd, &zero, padding) != ssize_t (padding))
set_error (errno);
#endif
pos += padding;
}
return res;
}
/* Write a streaming buffer. It must be using us as an allocator. */
#if MAPPED_WRITING
unsigned
elf_out::write (const bytes_out &buf)
{
gcc_checking_assert (buf.memory == this);
/* A directly mapped buffer. */
gcc_checking_assert (buf.buffer - hdr.buffer >= 0
&& buf.buffer - hdr.buffer + buf.size <= extent);
unsigned res = pos;
pos += buf.pos;
/* Align up. We're not going to advance into the next page. */
pos += -pos & (SECTION_ALIGN - 1);
return res;
}
#endif
/* Write data and add section. STRING_P is true for a string
section, false for PROGBITS. NAME identifies the section (0 is the
empty name). DATA is the contents. Return section number or 0 on
failure (0 is the undef section). */
unsigned
elf_out::add (const bytes_out &data, bool string_p, unsigned name)
{
unsigned off = write (data);
return add (string_p ? SHT_STRTAB : SHT_PROGBITS, name,
off, data.pos, string_p ? SHF_STRINGS : SHF_NONE);
}
/* Begin writing the file. Initialize the section table and write an
empty header. Return false on failure. */
bool
elf_out::begin ()
{
if (!parent::begin ())
return false;
/* Let the allocators pick a default. */
data::simple_memory.grow (strtab, 0, false);
data::simple_memory.grow (sectab, 0, false);
/* The string table starts with an empty string. */
name ("");
/* Create the UNDEF section. */
add (SHT_NONE);
#if MAPPED_WRITING
/* Start a mapping. */
create_mapping (EXPERIMENT (page_size,
(32767 + page_size) & ~(page_size - 1)));
if (!hdr.buffer)
return false;
#endif
/* Write an empty header. */
grow (hdr, sizeof (header), true);
header *h = reinterpret_cast<header *> (hdr.buffer);
memset (h, 0, sizeof (header));
hdr.pos = hdr.size;
write (hdr);
return !get_error ();
}
/* Finish writing the file. Write out the string & section tables.
Fill in the header. Return false on error. */
bool
elf_out::end ()
{
if (fd >= 0)
{
/* Write the string table. */
unsigned strnam = name (".strtab");
unsigned stroff = write (strtab);
unsigned strndx = add (SHT_STRTAB, strnam, stroff, strtab.pos,
SHF_STRINGS);
/* Store escape values in section[0]. */
if (strndx >= SHN_LORESERVE)
{
reinterpret_cast<section *> (sectab.buffer)->link = strndx;
strndx = SHN_XINDEX;
}
unsigned shnum = sectab.pos / sizeof (section);
if (shnum >= SHN_LORESERVE)
{
reinterpret_cast<section *> (sectab.buffer)->size = shnum;
shnum = SHN_XINDEX;
}
unsigned shoff = write (sectab);
#if MAPPED_WRITING
if (offset)
{
remove_mapping ();
offset = 0;
create_mapping ((sizeof (header) + page_size - 1) & ~(page_size - 1),
false);
}
unsigned length = pos;
#else
if (lseek (fd, 0, SEEK_SET) < 0)
set_error (errno);
#endif
/* Write header. */
if (!get_error ())
{
/* Write the correct header now. */
header *h = reinterpret_cast<header *> (hdr.buffer);
h->ident.magic[0] = 0x7f;
h->ident.magic[1] = 'E'; /* Elrond */
h->ident.magic[2] = 'L'; /* is an */
h->ident.magic[3] = 'F'; /* elf. */
h->ident.klass = MY_CLASS;
h->ident.data = MY_ENDIAN;
h->ident.version = EV_CURRENT;
h->ident.osabi = OSABI_NONE;
h->type = ET_NONE;
h->machine = EM_NONE;
h->version = EV_CURRENT;
h->shoff = shoff;
h->ehsize = sizeof (header);
h->shentsize = sizeof (section);
h->shnum = shnum;
h->shstrndx = strndx;
pos = 0;
write (hdr);
}
#if MAPPED_WRITING
remove_mapping ();
if (ftruncate (fd, length))
set_error (errno);
#endif
}
data::simple_memory.shrink (sectab);
data::simple_memory.shrink (strtab);
return parent::end ();
}
/********************************************************************/
/* A dependency set. This is used during stream out to determine the
connectivity of the graph. Every namespace-scope declaration that
needs writing has a depset. The depset is filled with the (depsets
of) declarations within this module that it references. For a
declaration those will generally be named types. For definitions
they will also include declarations in the body.
From that we can convert the graph to a DAG, via determining the
Strongly Connected Clusters. Each cluster is streamed
independently, and thus we achieve lazy loading.
Other decls that get a depset are namespaces themselves and
unnameable declarations. */
class depset {
private:
tree entity; /* Entity, or containing namespace. */
uintptr_t discriminator; /* Flags or identifier. */
public:
/* The kinds of entity the depset could describe. The ordering is
significant, see entity_kind_name. */
enum entity_kind
{
EK_DECL, /* A decl. */
EK_SPECIALIZATION, /* A specialization. */
EK_PARTIAL, /* A partial specialization. */
EK_USING, /* A using declaration (at namespace scope). */
EK_NAMESPACE, /* A namespace. */
EK_REDIRECT, /* Redirect to a template_decl. */
EK_EXPLICIT_HWM,
EK_BINDING = EK_EXPLICIT_HWM, /* Implicitly encoded. */
EK_FOR_BINDING, /* A decl being inserted for a binding. */
EK_INNER_DECL, /* A decl defined outside of its imported
context. */
EK_DIRECT_HWM = EK_PARTIAL + 1,
EK_BITS = 3 /* Only need to encode below EK_EXPLICIT_HWM. */
};
private:
/* Placement of bit fields in discriminator. */
enum disc_bits
{
DB_ZERO_BIT, /* Set to disambiguate identifier from flags */
DB_SPECIAL_BIT, /* First dep slot is special. */
DB_KIND_BIT, /* Kind of the entity. */
DB_KIND_BITS = EK_BITS,
DB_DEFN_BIT = DB_KIND_BIT + DB_KIND_BITS,
DB_IS_MEMBER_BIT, /* Is an out-of-class member. */
DB_IS_INTERNAL_BIT, /* It is an (erroneous)
internal-linkage entity. */
DB_REFS_INTERNAL_BIT, /* Refers to an internal-linkage
entity. */
DB_IMPORTED_BIT, /* An imported entity. */
DB_UNREACHED_BIT, /* A yet-to-be reached entity. */
DB_HIDDEN_BIT, /* A hidden binding. */
/* The following bits are not independent, but enumerating them is
awkward. */
DB_ALIAS_TMPL_INST_BIT, /* An alias template instantiation. */
DB_ALIAS_SPEC_BIT, /* Specialization of an alias template
(in both spec tables). */
DB_TYPE_SPEC_BIT, /* Specialization in the type table. */
DB_FRIEND_SPEC_BIT, /* An instantiated template friend. */
};
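/* For example (illustrative): a depset made by make_entity for an
EK_DECL definition has DB_ZERO_BIT set, EK_DECL in the kind field
and DB_DEFN_BIT set. A binding made by make_binding instead holds
the IDENTIFIER pointer itself, whose DB_ZERO_BIT is clear because
trees are suitably aligned. */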
public:
/* The first slot is special for EK_SPECIALIZATIONs: it is a
spec_entry pointer. It is not relevant for the SCC
determination. */
vec<depset *> deps; /* Depsets we reference. */
public:
unsigned cluster; /* Strongly connected cluster, later entity number */
unsigned section; /* Section written to. */
/* During SCC construction, section is lowlink, until the depset is
removed from the stack. See Tarjan algorithm for details. */
private:
/* Construction via factories. Destruction via hash traits. */
depset (tree entity);
~depset ();
public:
static depset *make_binding (tree, tree);
static depset *make_entity (tree, entity_kind, bool = false);
/* Late setting a binding name -- /then/ insert into hash! */
inline void set_binding_name (tree name)
{
gcc_checking_assert (!get_name ());
discriminator = reinterpret_cast<uintptr_t> (name);
}
private:
template<unsigned I> void set_flag_bit ()
{
gcc_checking_assert (I < 2 || !is_binding ());
discriminator |= 1u << I;
}
template<unsigned I> void clear_flag_bit ()
{
gcc_checking_assert (I < 2 || !is_binding ());
discriminator &= ~(1u << I);
}
template<unsigned I> bool get_flag_bit () const
{
gcc_checking_assert (I < 2 || !is_binding ());
return bool ((discriminator >> I) & 1);
}
public:
bool is_binding () const
{
return !get_flag_bit<DB_ZERO_BIT> ();
}
entity_kind get_entity_kind () const
{
if (is_binding ())
return EK_BINDING;
return entity_kind ((discriminator >> DB_KIND_BIT) & ((1u << EK_BITS) - 1));
}
const char *entity_kind_name () const;
public:
bool has_defn () const
{
return get_flag_bit<DB_DEFN_BIT> ();
}
public:
/* This class-member is defined here, but the class was imported. */
bool is_member () const
{
gcc_checking_assert (get_entity_kind () == EK_DECL);
return get_flag_bit<DB_IS_MEMBER_BIT> ();
}
public:
bool is_internal () const
{
return get_flag_bit<DB_IS_INTERNAL_BIT> ();
}
bool refs_internal () const
{
return get_flag_bit<DB_REFS_INTERNAL_BIT> ();
}
bool is_import () const
{
return get_flag_bit<DB_IMPORTED_BIT> ();
}
bool is_unreached () const
{
return get_flag_bit<DB_UNREACHED_BIT> ();
}
bool is_alias_tmpl_inst () const
{
return get_flag_bit<DB_ALIAS_TMPL_INST_BIT> ();
}
bool is_alias () const
{
return get_flag_bit<DB_ALIAS_SPEC_BIT> ();
}
bool is_hidden () const
{
return get_flag_bit<DB_HIDDEN_BIT> ();
}
bool is_type_spec () const
{
return get_flag_bit<DB_TYPE_SPEC_BIT> ();
}
bool is_friend_spec () const
{
return get_flag_bit<DB_FRIEND_SPEC_BIT> ();
}
public:
/* We set these bits outside of depset. */
void set_hidden_binding ()
{
set_flag_bit<DB_HIDDEN_BIT> ();
}
void clear_hidden_binding ()
{
clear_flag_bit<DB_HIDDEN_BIT> ();
}
public:
bool is_special () const
{
return get_flag_bit<DB_SPECIAL_BIT> ();
}
void set_special ()
{
set_flag_bit<DB_SPECIAL_BIT> ();
}
public:
tree get_entity () const
{
return entity;
}
tree get_name () const
{
gcc_checking_assert (is_binding ());
return reinterpret_cast <tree> (discriminator);
}
public:
/* Traits for a hash table of pointers to bindings. */
struct traits {
/* Each entry is a pointer to a depset. */
typedef depset *value_type;
/* We lookup by container:maybe-identifier pair. */
typedef std::pair<tree,tree> compare_type;
static const bool empty_zero_p = true;
/* hash and equality for compare_type. */
inline static hashval_t hash (const compare_type &p)
{
hashval_t h = pointer_hash<tree_node>::hash (p.first);
if (p.second)
{
hashval_t nh = IDENTIFIER_HASH_VALUE (p.second);
h = iterative_hash_hashval_t (h, nh);
}
return h;
}
inline static bool equal (const value_type b, const compare_type &p)
{
if (b->entity != p.first)
return false;
if (p.second)
return b->discriminator == reinterpret_cast<uintptr_t> (p.second);
else
return !b->is_binding ();
}
/* (re)hasher for a binding itself. */
inline static hashval_t hash (const value_type b)
{
hashval_t h = pointer_hash<tree_node>::hash (b->entity);
if (b->is_binding ())
{
hashval_t nh = IDENTIFIER_HASH_VALUE (b->get_name ());
h = iterative_hash_hashval_t (h, nh);
}
return h;
}
/* Empty via NULL. */
static inline void mark_empty (value_type &p) {p = NULL;}
static inline bool is_empty (value_type p) {return !p;}
/* Nothing is deletable. Everything is insertable. */
static bool is_deleted (value_type) { return false; }
static void mark_deleted (value_type) { gcc_unreachable (); }
/* We own the entities in the hash table. */
static void remove (value_type p)
{
delete (p);
}
};
public:
class hash : public hash_table<traits> {
typedef traits::compare_type key_t;
typedef hash_table<traits> parent;
public:
vec<depset *> worklist; /* Worklist of decls to walk. */
hash *chain; /* Original table. */
depset *current; /* Current depset being depended. */
unsigned section; /* When writing out, the section. */
bool sneakoscope; /* Detecting dark magic (of a voldemort). */
bool reached_unreached; /* We reached an unreached entity. */
public:
hash (size_t size, hash *c = NULL)
: parent (size), chain (c), current (NULL), section (0),
sneakoscope (false), reached_unreached (false)
{
worklist.create (size);
}
~hash ()
{
worklist.release ();
}
public:
bool is_key_order () const
{
return chain != NULL;
}
private:
depset **entity_slot (tree entity, bool = true);
depset **binding_slot (tree ctx, tree name, bool = true);
depset *maybe_add_declaration (tree decl);
public:
depset *find_dependency (tree entity);
depset *find_binding (tree ctx, tree name);
depset *make_dependency (tree decl, entity_kind);
void add_dependency (depset *);
public:
void add_mergeable (depset *);
depset *add_dependency (tree decl, entity_kind);
void add_namespace_context (depset *, tree ns);
private:
static bool add_binding_entity (tree, WMB_Flags, void *);
public:
bool add_namespace_entities (tree ns, bitmap partitions);
void add_specializations (bool decl_p);
void add_partial_entities (vec<tree, va_gc> *);
void add_class_entities (vec<tree, va_gc> *);
public:
void find_dependencies (module_state *);
bool finalize_dependencies ();
vec<depset *> connect ();
};
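/* A sketch of how these members fit together on the write side
(illustrative, not actual call sites -- 'partitions' and 'state'
stand for the module being written):

depset::hash table (200);
table.add_namespace_entities (global_namespace, partitions);
table.add_specializations (true); // decl table
table.add_specializations (false); // type table
table.find_dependencies (state);
if (table.finalize_dependencies ())
{
vec<depset *> sccs = table.connect (); // dependency-ordered SCCs
// ... stream each cluster, then sccs.release () ...
}
*/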
public:
struct tarjan {
vec<depset *> result;
vec<depset *> stack;
unsigned index;
tarjan (unsigned size)
: index (0)
{
result.create (size);
stack.create (50);
}
~tarjan ()
{
gcc_assert (!stack.length ());
stack.release ();
}
public:
void connect (depset *);
};
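/* tarjan::connect is presumably the core of Tarjan's SCC algorithm:
depsets reachable from the argument are pushed onto stack, and each
completed strongly-connected component is moved onto result, which
therefore ends up in dependency order. */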
};
inline
depset::depset (tree entity)
:entity (entity), discriminator (0), cluster (0), section (0)
{
deps.create (0);
}
inline
depset::~depset ()
{
deps.release ();
}
const char *
depset::entity_kind_name () const
{
/* Same order as entity_kind. */
static const char *const names[] =
{"decl", "specialization", "partial", "using",
"namespace", "redirect", "binding"};
entity_kind kind = get_entity_kind ();
gcc_checking_assert (kind < sizeof (names) / sizeof (names[0]));
return names[kind];
}
/* Create a depset for a namespace binding NS::NAME. */
depset *depset::make_binding (tree ns, tree name)
{
depset *binding = new depset (ns);
binding->discriminator = reinterpret_cast <uintptr_t> (name);
return binding;
}
depset *depset::make_entity (tree entity, entity_kind ek, bool is_defn)
{
depset *r = new depset (entity);
r->discriminator = ((1 << DB_ZERO_BIT)
| (ek << DB_KIND_BIT)
| is_defn << DB_DEFN_BIT);
return r;
}
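/* An illustration of the encoding, using the enumerators above:
make_entity (decl, EK_DECL, true) produces a discriminator with
DB_ZERO_BIT set (marking a non-binding), EK_DECL in the kind field
and DB_DEFN_BIT set, so get_entity_kind () == EK_DECL and
has_defn () is true. make_binding stores the name pointer itself,
leaving DB_ZERO_BIT clear, so is_binding () is true. */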
class pending_key
{
public:
tree ns;
tree id;
};
template<>
struct default_hash_traits<pending_key>
{
using value_type = pending_key;
static const bool empty_zero_p = false;
static hashval_t hash (const value_type &k)
{
hashval_t h = IDENTIFIER_HASH_VALUE (k.id);
h = iterative_hash_hashval_t (DECL_UID (k.ns), h);
return h;
}
static bool equal (const value_type &k, const value_type &l)
{
return k.ns == l.ns && k.id == l.id;
}
static void mark_empty (value_type &k)
{
k.ns = k.id = NULL_TREE;
}
static void mark_deleted (value_type &k)
{
k.ns = NULL_TREE;
gcc_checking_assert (k.id);
}
static bool is_empty (const value_type &k)
{
return k.ns == NULL_TREE && k.id == NULL_TREE;
}
static bool is_deleted (const value_type &k)
{
return k.ns == NULL_TREE && k.id != NULL_TREE;
}
static void remove (value_type &)
{
}
};
typedef hash_map<pending_key, auto_vec<unsigned>> pending_map_t;
/* Not-loaded entities that are keyed to a namespace-scope
identifier. See module_state::write_pendings for details. */
pending_map_t *pending_table;
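/* For example, a pending entity keyed to 'NS::name' would sit in
this table as {NS, name} mapping to (what are presumably) the flat
entity indices to load lazily once that binding is looked up. */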
/* Decls that need some post processing once a batch of lazy loads has
completed. */
vec<tree, va_heap, vl_embed> *post_load_decls;
/* Some entities are attached to another entity for ODR purposes.
For example, at namespace scope, given 'inline auto var = []{};',
the lambda is attached to 'var' and follows its ODRness. */
typedef hash_map<tree, auto_vec<tree>> attached_map_t;
static attached_map_t *attached_table;
/********************************************************************/
/* Tree streaming. The tree streaming is very specific to the tree
structures themselves. A tag indicates the kind of tree being
streamed. -ve tags indicate backreferences to already-streamed
trees. Backreferences are auto-numbered. */
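/* To illustrate the numbering: the first tree streamed by value
becomes back-reference tag -1, the next -2, and so on; any later
mention of such a tree is written as just that negative tag, and
the reader rebuilds the identical numbering in its back_refs
vector. */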
/* Tree tags. */
enum tree_tag {
tt_null, /* NULL_TREE. */
tt_fixed, /* Fixed vector index. */
tt_node, /* By-value node. */
tt_decl, /* By-value mergeable decl. */
tt_tpl_parm, /* Template parm. */
/* The ordering of the following 4 is relied upon in
trees_out::tree_node. */
tt_id, /* Identifier node. */
tt_conv_id, /* Conversion operator name. */
tt_anon_id, /* Anonymous name. */
tt_lambda_id, /* Lambda name. */
tt_typedef_type, /* A (possibly implicit) typedefed type. */
tt_derived_type, /* A type derived from another type. */
tt_variant_type, /* A variant of another type. */
tt_tinfo_var, /* Typeinfo object. */
tt_tinfo_typedef, /* Typeinfo typedef. */
tt_ptrmem_type, /* Pointer to member type. */
tt_parm, /* Function parameter or result. */
tt_enum_value, /* An enum value. */
tt_enum_decl, /* An enum decl. */
tt_data_member, /* Data member/using-decl. */
tt_binfo, /* A BINFO. */
tt_vtable, /* A vtable. */
tt_thunk, /* A thunk. */
tt_clone_ref, /* A reference to a function clone. */
tt_entity, /* An extra-cluster entity. */
tt_template, /* The TEMPLATE_RESULT of a template. */
};
enum walk_kind {
WK_none, /* No walk to do (a back- or fixed-ref happened). */
WK_normal, /* Normal walk (by-name if possible). */
WK_value, /* By-value walk. */
};
enum merge_kind
{
MK_unique, /* Known unique. */
MK_named, /* Found by CTX, NAME + maybe_arg types etc. */
MK_field, /* Found by CTX and index on TYPE_FIELDS. */
MK_vtable, /* Found by CTX and index on TYPE_VTABLES. */
MK_as_base, /* Found by CTX. */
MK_partial,
MK_enum, /* Found by CTX & its 1st member's NAME. */
MK_attached, /* Found by attachee & index. */
MK_friend_spec, /* Like named, but has a tmpl & args too. */
MK_local_friend, /* Found by CTX, index. */
MK_indirect_lwm = MK_enum,
/* Template specialization kinds below. These are all found via
primary template and specialization args. */
MK_template_mask = 0x10, /* A template specialization. */
MK_tmpl_decl_mask = 0x4, /* In decl table. */
MK_tmpl_alias_mask = 0x2, /* Also in type table. */
MK_tmpl_tmpl_mask = 0x1, /* We want TEMPLATE_DECL. */
MK_type_spec = MK_template_mask,
MK_decl_spec = MK_template_mask | MK_tmpl_decl_mask,
MK_alias_spec = MK_decl_spec | MK_tmpl_alias_mask,
MK_hwm = 0x20
};
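/* The specialization kinds thus compose by or-ing masks; as an
illustration using the values above:

MK_type_spec = 0x10 (type table)
MK_decl_spec = 0x10 | 0x4 = 0x14 (decl table)
MK_alias_spec = 0x14 | 0x2 = 0x16 (decl & type tables)

and or-ing in MK_tmpl_tmpl_mask (0x1) selects the TEMPLATE_DECL
itself, which is why merge_kind_name below pairs each "spec" name
with a "tmpl spec" neighbour. */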
/* This is more than a debugging array. NULLs are used to detect
invalid merge_kind numbers. */
static char const *const merge_kind_name[MK_hwm] =
{
"unique", "named", "field", "vtable", /* 0...3 */
"asbase", "partial", "enum", "attached", /* 4...7 */
"friend spec", "local friend", NULL, NULL, /* 8...11 */
NULL, NULL, NULL, NULL,
"type spec", "type tmpl spec", /* 16,17 type (template). */
NULL, NULL,
"decl spec", "decl tmpl spec", /* 20,21 decl (template). */
"alias spec", "alias tmpl spec", /* 22,23 alias (template). */
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL,
};
/* Mergeable entity location data. */
struct merge_key {
cp_ref_qualifier ref_q : 2;
unsigned index;
tree ret; /* Return type, if appropriate. */
tree args; /* Arg types, if appropriate. */
tree constraints; /* Constraints. */
merge_key ()
:ref_q (REF_QUAL_NONE), index (0),
ret (NULL_TREE), args (NULL_TREE),
constraints (NULL_TREE)
{
}
};
struct duplicate_hash : nodel_ptr_hash<tree_node>
{
#if 0
/* This breaks variadic bases in the xtreme_header tests. Since ::equal is
the default pointer_hash::equal, let's use the default hash as well. */
inline static hashval_t hash (value_type decl)
{
if (TREE_CODE (decl) == TREE_BINFO)
decl = TYPE_NAME (BINFO_TYPE (decl));
return hashval_t (DECL_UID (decl));
}
#endif
};
/* Hashmap of merged duplicates. Usually decls, but can contain
BINFOs. */
typedef hash_map<tree,uintptr_t,
simple_hashmap_traits<duplicate_hash,uintptr_t> >
duplicate_hash_map;
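/* The mapped uintptr_t is a tagged pointer: a decl with bit 0
serving as an 'already diagnosed' flag -- see
trees_in::unmatched_duplicate and trees_in::maybe_duplicate below,
which set and mask that bit respectively. */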
/* Tree stream reader. Note that reading a stream doesn't mark the
read trees with TREE_VISITED. Thus it's quite safe to have
multiple concurrent readers -- which is good, because of lazy
loading. */
class trees_in : public bytes_in {
typedef bytes_in parent;
private:
module_state *state; /* Module being imported. */
vec<tree> back_refs; /* Back references. */
duplicate_hash_map *duplicates; /* Map from existing decls to duplicates. */
vec<tree> post_decls; /* Decls to post process. */
unsigned unused; /* Inhibit any interior TREE_USED
marking. */
public:
trees_in (module_state *);
~trees_in ();
public:
int insert (tree);
tree back_ref (int);
private:
tree start (unsigned = 0);
public:
/* Needed for binfo writing. */
bool core_bools (tree);
private:
/* Stream tree_core, lang_decl_specific and lang_type_specific
bits. */
bool core_vals (tree);
bool lang_type_bools (tree);
bool lang_type_vals (tree);
bool lang_decl_bools (tree);
bool lang_decl_vals (tree);
bool lang_vals (tree);
bool tree_node_bools (tree);
bool tree_node_vals (tree);
tree tree_value ();
tree decl_value ();
tree tpl_parm_value ();
private:
tree chained_decls (); /* Follow DECL_CHAIN. */
vec<tree, va_heap> *vec_chained_decls ();
vec<tree, va_gc> *tree_vec (); /* vec of tree. */
vec<tree_pair_s, va_gc> *tree_pair_vec (); /* vec of tree_pair. */
tree tree_list (bool has_purpose);
public:
/* Read a tree node. */
tree tree_node (bool is_use = false);
private:
bool install_entity (tree decl);
tree tpl_parms (unsigned &tpl_levels);
bool tpl_parms_fini (tree decl, unsigned tpl_levels);
bool tpl_header (tree decl, unsigned *tpl_levels);
int fn_parms_init (tree);
void fn_parms_fini (int tag, tree fn, tree existing, bool has_defn);
unsigned add_indirect_tpl_parms (tree);
public:
bool add_indirects (tree);
public:
/* Deserialize various definitions. */
bool read_definition (tree decl);
private:
bool is_matching_decl (tree existing, tree decl, bool is_typedef);
static bool install_implicit_member (tree decl);
bool read_function_def (tree decl, tree maybe_template);
bool read_var_def (tree decl, tree maybe_template);
bool read_class_def (tree decl, tree maybe_template);
bool read_enum_def (tree decl, tree maybe_template);
public:
tree decl_container ();
tree key_mergeable (int tag, merge_kind, tree decl, tree inner, tree type,
tree container, bool is_mod);
unsigned binfo_mergeable (tree *);
private:
uintptr_t *find_duplicate (tree existing);
void register_duplicate (tree decl, tree existing);
/* Mark as an already diagnosed bad duplicate. */
void unmatched_duplicate (tree existing)
{
*find_duplicate (existing) |= 1;
}
public:
bool is_duplicate (tree decl)
{
return find_duplicate (decl) != NULL;
}
tree maybe_duplicate (tree decl)
{
if (uintptr_t *dup = find_duplicate (decl))
return reinterpret_cast<tree> (*dup & ~uintptr_t (1));
return decl;
}
tree odr_duplicate (tree decl, bool has_defn);
public:
/* Return the next decl to postprocess, or NULL. */
tree post_process ()
{
return post_decls.length () ? post_decls.pop () : NULL_TREE;
}
private:
/* Register DECL for postprocessing. */
void post_process (tree decl)
{
post_decls.safe_push (decl);
}
private:
void assert_definition (tree, bool installing);
};
trees_in::trees_in (module_state *state)
:parent (), state (state), unused (0)
{
duplicates = NULL;
back_refs.create (500);
post_decls.create (0);
}
trees_in::~trees_in ()
{
delete (duplicates);
back_refs.release ();
post_decls.release ();
}
/* Tree stream writer. */
class trees_out : public bytes_out {
typedef bytes_out parent;
private:
module_state *state; /* The module we are writing. */
ptr_int_hash_map tree_map; /* Trees to references. */
depset::hash *dep_hash; /* Dependency table. */
int ref_num; /* Back reference number. */
unsigned section;
#if CHECKING_P
int importedness; /* Checker that imports are not occurring
inappropriately: +ve means imports are ok,
-ve means they are not. */
#endif
public:
trees_out (allocator *, module_state *, depset::hash &deps, unsigned sec = 0);
~trees_out ();
private:
void mark_trees ();
void unmark_trees ();
public:
/* Hey, let's ignore the well known STL iterator idiom. */
void begin ();
unsigned end (elf_out *sink, unsigned name, unsigned *crc_ptr);
void end ();
public:
enum tags
{
tag_backref = -1, /* Upper bound on the backrefs. */
tag_value = 0, /* Write by value. */
tag_fixed /* Lower bound on the fixed trees. */
};
public:
bool is_key_order () const
{
return dep_hash->is_key_order ();
}
public:
int insert (tree, walk_kind = WK_normal);
private:
void start (tree, bool = false);
private:
walk_kind ref_node (tree);
public:
int get_tag (tree);
void set_importing (int i ATTRIBUTE_UNUSED)
{
#if CHECKING_P
importedness = i;
#endif
}
private:
void core_bools (tree);
void core_vals (tree);
void lang_type_bools (tree);
void lang_type_vals (tree);
void lang_decl_bools (tree);
void lang_decl_vals (tree);
void lang_vals (tree);
void tree_node_bools (tree);
void tree_node_vals (tree);
private:
void chained_decls (tree);
void vec_chained_decls (tree);
void tree_vec (vec<tree, va_gc> *);
void tree_pair_vec (vec<tree_pair_s, va_gc> *);
void tree_list (tree, bool has_purpose);
public:
/* Mark a node for by-value walking. */
void mark_by_value (tree);
public:
void tree_node (tree);
private:
void install_entity (tree decl, depset *);
void tpl_parms (tree parms, unsigned &tpl_levels);
void tpl_parms_fini (tree decl, unsigned tpl_levels);
void fn_parms_fini (tree) {}
unsigned add_indirect_tpl_parms (tree);
public:
void add_indirects (tree);
void fn_parms_init (tree);
void tpl_header (tree decl, unsigned *tpl_levels);
public:
merge_kind get_merge_kind (tree decl, depset *maybe_dep);
tree decl_container (tree decl);
void key_mergeable (int tag, merge_kind, tree decl, tree inner,
tree container, depset *maybe_dep);
void binfo_mergeable (tree binfo);
private:
bool decl_node (tree, walk_kind ref);
void type_node (tree);
void tree_value (tree);
void tpl_parm_value (tree);
public:
void decl_value (tree, depset *);
public:
/* Serialize various definitions. */
void write_definition (tree decl);
void mark_declaration (tree decl, bool do_defn);
private:
void mark_function_def (tree decl);
void mark_var_def (tree decl);
void mark_class_def (tree decl);
void mark_enum_def (tree decl);
void mark_class_member (tree decl, bool do_defn = true);
void mark_binfos (tree type);
private:
void write_var_def (tree decl);
void write_function_def (tree decl);
void write_class_def (tree decl);
void write_enum_def (tree decl);
private:
static void assert_definition (tree);
public:
static void instrument ();
private:
/* Tree instrumentation. */
static unsigned tree_val_count;
static unsigned decl_val_count;
static unsigned back_ref_count;
static unsigned null_count;
};
/* Instrumentation counters. */
unsigned trees_out::tree_val_count;
unsigned trees_out::decl_val_count;
unsigned trees_out::back_ref_count;
unsigned trees_out::null_count;
trees_out::trees_out (allocator *mem, module_state *state, depset::hash &deps,
unsigned section)
:parent (mem), state (state), tree_map (500),
dep_hash (&deps), ref_num (0), section (section)
{
#if CHECKING_P
importedness = 0;
#endif
}
trees_out::~trees_out ()
{
}
/********************************************************************/
/* Location. We're aware of the line-map concept and reproduce it
here. Each imported module allocates a contiguous span of ordinary
maps, and of macro maps. Adhoc maps are serialized by contents,
not pre-allocated. The scattered linemaps of a module are
coalesced when writing. */
/* I use half-open [first,second) ranges. */
typedef std::pair<unsigned,unsigned> range_t;
/* A range of locations. */
typedef std::pair<location_t,location_t> loc_range_t;
/* Spans of the line maps that are occupied by this TU. I.e. not
within imports. Only extended when in an interface unit.
Interval zero corresponds to the forced header linemap(s). This
is a singleton object. */
class loc_spans {
public:
/* An interval of line maps. The line maps here represent a contiguous
non-imported range. */
struct span {
loc_range_t ordinary; /* Ordinary map location range. */
loc_range_t macro; /* Macro map location range. */
int ordinary_delta; /* Add to ordinary loc to get serialized loc. */
int macro_delta; /* Likewise for macro loc. */
};
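/* So, as a sketch of the intended arithmetic: an ordinary location
LOC lying in a span (ordinary.first <= LOC < ordinary.second) is
serialized as LOC + ordinary_delta, and likewise a macro location
uses macro_delta. */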
private:
vec<span> *spans;
public:
loc_spans ()
/* Do not preallocate spans, as that causes
--enable-detailed-mem-stats problems. */
: spans (nullptr)
{
}
~loc_spans ()
{
delete spans;
}
public:
span &operator[] (unsigned ix)
{
return (*spans)[ix];
}
unsigned length () const
{
return spans->length ();
}
public:
bool init_p () const
{
return spans != nullptr;
}
/* Initializer. */
void init (const line_maps *lmaps, const line_map_ordinary *map);
/* Slightly skewed preprocessed files can cause us to miss an
initialization in some places. Fallback initializer. */
void maybe_init ()
{
if (!init_p ())
init (line_table, nullptr);
}
public:
enum {
SPAN_RESERVED = 0, /* Reserved (fixed) locations. */
SPAN_FIRST = 1, /* LWM of locations to stream */
SPAN_MAIN = 2 /* Main file and onwards. */
};
public:
location_t main_start () const
{
return (*spans)[SPAN_MAIN].ordinary.first;
}
public:
void open (location_t);
void close ();
public:
/* Propagate imported linemaps to us, if needed. */
bool maybe_propagate (module_state *import, location_t loc);
public:
const span *ordinary (location_t);
const span *macro (location_t);
};
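/* Piecing the above together, the expected lifecycle is: init ()
once the fixed linemaps exist, open ()/close () bracketing each
import so that imported maps fall outside our spans, and, when
writing, ordinary (loc)/macro (loc) to locate the span (and thus
the delta) covering a location. */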
static loc_spans spans;
/* Indirection to allow bsearching imports by ordinary location. */
static vec<module_state *> *ool;
/********************************************************************/
/* Data needed by a module during the process of loading. */
struct GTY(()) slurping {
/* Remap import's module numbering to our numbering. Values are
shifted by 1. Bit0 encodes if the import is direct. */
vec<unsigned, va_heap, vl_embed> *
GTY((skip)) remap; /* Module owner remapping. */
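/* So an entry of, say, (5 << 1) | 1 would denote a direct import
remapped to module number 5 in our numbering (assuming the shift
mentioned above is a left shift leaving bit 0 for directness). */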
elf_in *GTY((skip)) from; /* The elf loader. */
/* This map is only for header imports themselves -- the global
headers bitmap holds it for the current TU. */
bitmap headers; /* Transitive set of direct imports, including
self. Used for macro visibility and
priority. */
/* These objects point into the mmapped area, unless we're not doing
that, or we got frozen or closed. In those cases they point to
buffers we own. */
bytes_in macro_defs; /* Macro definitions. */
bytes_in macro_tbl; /* Macro table. */
/* Location remapping. first->ordinary, second->macro. */
range_t GTY((skip)) loc_deltas;
unsigned current; /* Section currently being loaded. */
unsigned remaining; /* Number of lazy sections yet to read. */
unsigned lru; /* An LRU counter. */
public:
slurping (elf_in *);
~slurping ();
public:
/* Close the ELF file, if it's open. */
void close ()
{