<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<!-- Copyright (C) 2011-2016 Free Software Foundation, Inc.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2 or
any later version published by the Free Software Foundation; with no
Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU
Free Documentation License". -->
<!-- Created by GNU Texinfo 5.2, http://www.gnu.org/software/texinfo/ -->
<head>
<title>GNU libitm: Internals</title>
<meta name="description" content="GNU libitm: Internals">
<meta name="keywords" content="GNU libitm: Internals">
<meta name="resource-type" content="document">
<meta name="distribution" content="global">
<meta name="Generator" content="makeinfo">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link href="index.html#Top" rel="start" title="Top">
<link href="Library-Index.html#Library-Index" rel="index" title="Library Index">
<link href="index.html#SEC_Contents" rel="contents" title="Table of Contents">
<link href="index.html#Top" rel="up" title="Top">
<link href="GNU-Free-Documentation-License.html#GNU-Free-Documentation-License" rel="next" title="GNU Free Documentation License">
<link href="The-libitm-ABI.html#The-libitm-ABI" rel="prev" title="The libitm ABI">
<style type="text/css">
<!--
a.summary-letter {text-decoration: none}
blockquote.smallquotation {font-size: smaller}
div.display {margin-left: 3.2em}
div.example {margin-left: 3.2em}
div.indentedblock {margin-left: 3.2em}
div.lisp {margin-left: 3.2em}
div.smalldisplay {margin-left: 3.2em}
div.smallexample {margin-left: 3.2em}
div.smallindentedblock {margin-left: 3.2em; font-size: smaller}
div.smalllisp {margin-left: 3.2em}
kbd {font-style:oblique}
pre.display {font-family: inherit}
pre.format {font-family: inherit}
pre.menu-comment {font-family: serif}
pre.menu-preformatted {font-family: serif}
pre.smalldisplay {font-family: inherit; font-size: smaller}
pre.smallexample {font-size: smaller}
pre.smallformat {font-family: inherit; font-size: smaller}
pre.smalllisp {font-size: smaller}
span.nocodebreak {white-space:nowrap}
span.nolinebreak {white-space:nowrap}
span.roman {font-family:serif; font-weight:normal}
span.sansserif {font-family:sans-serif; font-weight:normal}
ul.no-bullet {list-style: none}
-->
</style>
</head>
<body lang="en" bgcolor="#FFFFFF" text="#000000" link="#0000FF" vlink="#800080" alink="#FF0000">
<a name="Internals"></a>
<div class="header">
<p>
Next: <a href="GNU-Free-Documentation-License.html#GNU-Free-Documentation-License" accesskey="n" rel="next">GNU Free Documentation License</a>, Previous: <a href="The-libitm-ABI.html#The-libitm-ABI" accesskey="p" rel="prev">The libitm ABI</a>, Up: <a href="index.html#Top" accesskey="u" rel="up">Top</a> &nbsp; [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Library-Index.html#Library-Index" title="Index" rel="index">Index</a>]</p>
</div>
<hr>
<a name="Internals-1"></a>
<h2 class="chapter">4 Internals</h2>
<a name="TM-methods-and-method-groups"></a>
<h3 class="section">4.1 TM methods and method groups</h3>
<p>libitm supports several ways of synchronizing transactions with each other.
These TM methods (or TM algorithms) are implemented in the form of
subclasses of <code>abi_dispatch</code>, which provide methods for
transactional loads and stores as well as callbacks for rollback and commit.
All methods that are compatible with each other (i.e., that let concurrently
running transactions still synchronize correctly even if different methods
are used) belong to the same TM method group. Pointers to TM methods can be
obtained using the factory methods prefixed with <code>dispatch_</code> in
<samp>libitm_i.h</samp>. There are two special methods, <code>dispatch_serial</code> and
<code>dispatch_serialirr</code>, that are compatible with all methods because they
run transactions completely in serial mode.
</p>
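<p>As an illustration, the following simplified sketch (not the actual
declarations in <samp>libitm_i.h</samp>) shows the shape of such a dispatch
interface; the class and factory names are hypothetical:
</p>
<div class="example">
<pre class="example">#include &lt;cstdint&gt;

// Illustrative sketch only; not the real libitm declarations.
// A TM method supplies transactional loads/stores plus commit/rollback hooks.
class abi_dispatch_sketch
{
public:
  virtual uint32_t load_u32 (const uint32_t *addr) = 0;      // transactional load
  virtual void store_u32 (uint32_t *addr, uint32_t val) = 0; // transactional store
  virtual bool trycommit () = 0;    // commit callback
  virtual void rollback () = 0;     // rollback callback
  virtual ~abi_dispatch_sketch () { }
};

// Hypothetical factory, analogous in spirit to GTM::dispatch_serialirr():
// serial-irrevocable dispatch is compatible with every method group because
// it runs the transaction completely in serial mode.
abi_dispatch_sketch *dispatch_serialirr_sketch ();
</pre></div>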
<a name="TM-method-life-cycle"></a>
<h4 class="subsection">4.1.1 TM method life cycle</h4>
<p>The state of TM methods does not change after construction, but they do alter
the state of transactions that use them. However, because
per-transaction data gets used by several methods, <code>gtm_thread</code> is
responsible for setting an initial state that is useful for all methods.
After that, methods are responsible for resetting/clearing this state on each
rollback or commit (of outermost transactions), so that the transaction
executed next is not affected by the previous transaction.
</p>
<p>There is also global state associated with each method group, which is
initialized and shut down (<code>method_group::init()</code> and <code>fini()</code>)
when switching between method groups (see <samp>retry.cc</samp>).
</p>
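<p>A minimal sketch of this life cycle, assuming a hypothetical
method-group-like interface (the real logic lives in <samp>retry.cc</samp>):
</p>
<div class="example">
<pre class="example">// Hypothetical sketch of per-method-group global state handling.
struct method_group_sketch
{
  virtual void init () = 0;   // set up the group's global state
  virtual void fini () = 0;   // shut the global state down again
  virtual ~method_group_sketch () { }
};

// Must only be called while no active (non-serial) transaction is running,
// i.e., while holding the serial lock exclusively (see the locking section).
void switch_method_group (method_group_sketch *old_mg,
                          method_group_sketch *new_mg)
{
  if (old_mg)
    old_mg-&gt;fini ();
  new_mg-&gt;init ();
}
</pre></div>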
<a name="Selecting-the-default-method"></a>
<h4 class="subsection">4.1.2 Selecting the default method</h4>
<p>The default method that libitm uses for freshly started transactions (but
not necessarily for restarted transactions) can be set via an environment
variable (<code>ITM_DEFAULT_METHOD</code>), whose value should be equal to the name
of one of the factory methods returning abi_dispatch subclasses but without
the &quot;dispatch_&quot; prefix (e.g., &quot;serialirr&quot; instead of
<code>GTM::dispatch_serialirr()</code>).
</p>
<p>Note that this environment variable is only a hint for libitm and might not
be supported in the future.
</p>
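<p>For illustration, a program could be started as
<code>ITM_DEFAULT_METHOD=serialirr ./a.out</code>, and the library could read
the hint roughly as in the following sketch (the actual parsing code in libitm
differs; this only shows the idea):
</p>
<div class="example">
<pre class="example">#include &lt;cstdlib&gt;
#include &lt;string&gt;

// Sketch only: read the ITM_DEFAULT_METHOD hint.  The value matches the name
// of a dispatch_ factory method without the "dispatch_" prefix, e.g.
// "serialirr" for GTM::dispatch_serialirr().
std::string default_method_hint ()
{
  const char *name = std::getenv ("ITM_DEFAULT_METHOD");
  return name != nullptr ? std::string (name) : std::string ();
}
</pre></div>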
<a name="Nesting_003a-flat-vs_002e-closed"></a>
<h3 class="section">4.2 Nesting: flat vs. closed</h3>
<p>We support two different kinds of nesting of transactions. In the case of
<em>flat nesting</em>, the nesting structure is flattened and all nested
transactions are subsumed by the enclosing transaction. In contrast,
with <em>closed nesting</em>, nested transactions that have not yet committed
can be rolled back separately from the enclosing transactions; when they
commit, they are subsumed by the enclosing transaction, and their effects
will be finally committed when the outermost transaction commits.
<em>Open nesting</em> (where nested transactions can commit independently of the
enclosing transactions) is not supported.
</p>
<p>Flat nesting is the default nesting mode, but closed nesting is supported and
used when transactions contain user-controlled aborts
(<code>__transaction_cancel</code> statements). We assume that user-controlled
aborts are rare in typical code and used mostly in exceptional situations.
Thus, it makes more sense to use flat nesting by default to avoid the
performance overhead of the additional checkpoints required for closed
nesting. User-controlled aborts will correctly abort the innermost enclosing
transaction, whereas the whole (i.e., outermost) transaction will be restarted
otherwise (e.g., when a transaction encounters data conflicts during
optimistic execution).
</p>
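<p>For example, with GCC&rsquo;s <code>-fgnu-tm</code> language extensions, nesting
with a user-controlled abort looks like this; without the
<code>__transaction_cancel</code>, both levels would simply be flattened:
</p>
<div class="example">
<pre class="example">/* Compile with: gcc -fgnu-tm example.c */
int balance = 100;

void withdraw (int amount)
{
  __transaction_atomic
  {
    __transaction_atomic      /* nested transaction */
    {
      if (balance &lt; amount)
        __transaction_cancel; /* aborts only this innermost transaction */
      balance -= amount;
    }
  }
}
</pre></div>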
<a name="Locking-conventions"></a>
<h3 class="section">4.3 Locking conventions</h3>
<p>This section documents the locking scheme and rules for all uses of locking
in libitm. We have to support serial(-irrevocable) mode, which is implemented
using a global lock as explained next (called the <em>serial lock</em>). To
simplify the overall design, we use the same lock as catch-all locking
mechanism for other infrequent tasks such as (de)registering clone tables or
threads. Besides the serial lock, there are <em>per-method-group locks</em> that
are managed by specific method groups (i.e., groups of similar TM concurrency
control algorithms), and lock-like constructs for quiescence-based operations
such as ensuring privatization safety.
</p>
<p>Thus, the actors that participate in the libitm-internal locking are
<em>active transactions</em> that do not run in serial mode, <em>serial
transactions</em> (which run, or are about to run, in serial mode), and management tasks
that do not execute within a transaction but have acquired the serial mode
like a serial transaction would do (e.g., to be able to register threads with
libitm). Transactions become active as soon as they have successfully used the
serial lock to announce this globally (see <a href="#serial_002dlock_002dimpl">Serial lock
implementation</a>). Likewise, transactions become serial transactions as soon as
they have acquired the exclusive rights provided by the serial lock (i.e.,
serial mode, which also means that there are no other concurrent active or
serial transactions). Note that active transactions can become serial
transactions when they enter serial mode during the runtime of the
transaction.
</p>
<a name="State_002dto_002dlock-mapping"></a>
<h4 class="subsection">4.3.1 State-to-lock mapping</h4>
<p>Application data is protected by the serial lock if there is a serial
transaction and no concurrently running active transaction (i.e., non-serial).
Otherwise, application data is protected by the currently selected method
group, which might use per-method-group locks or other mechanisms. Also note
that application data that is about to be privatized might not be allowed to be
accessed by nontransactional code until privatization safety has been ensured;
the details of this are handled by the current method group.
</p>
<p>libitm-internal state is either protected by the serial lock or accessed
through custom concurrent code. The latter applies to the public/shared part
of a transaction object and most typical method-group-specific state.
</p>
<p>The former category (protected by the serial lock) includes:
</p><ul>
<li> The list of active threads that have used transactions.
</li><li> The tables that map functions to their transactional clones.
</li><li> The current selection of which method group to use.
</li><li> Some method-group-specific data, or invariants of this data. For example,
resetting a method group to its initial state is handled by switching to the
same method group, so the serial lock protects such resetting as well.
</li></ul>
<p>In general, such state is immutable whenever there exists an active
(non-serial) transaction. If there is no active transaction, a serial
transaction (or a thread that is not currently executing a transaction but has
acquired the serial lock) is allowed to modify this state (but must of course
be careful to not surprise the current method group&rsquo;s implementation with such
modifications).
</p>
<a name="Lock-acquisition-order"></a>
<h4 class="subsection">4.3.2 Lock acquisition order</h4>
<p>To prevent deadlocks, lock acquisition must happen in a globally agreed-upon
order. Note that this applies to other forms of blocking too, but does not
necessarily apply to lock acquisitions that do not block (e.g., trylock()
calls that do not get retried forever). Note that serial transactions never
revert to being active transactions until the transaction has committed.
Likewise, active transactions stay active until they have committed.
Per-method-group locks are typically also not released before commit.
</p>
<p>Lock acquisition / blocking rules:
</p><ul>
<li> Transactions must become active or serial before they are allowed to
use method-group-specific locks or blocking (i.e., the serial lock must be
acquired before those other locks, either in serial or nonserial mode).
</li><li> Any number of threads that do not currently run active transactions can
block while trying to get the serial lock in exclusive mode. Note that active
transactions must not block when trying to upgrade to serial mode unless there
is no other transaction that is trying that (the latter is ensured by the
serial lock implementation).
</li><li> Method groups must prevent deadlocks on their locks. In particular, they
must also be prepared for another active transaction that has acquired
method-group-specific locks but is blocked during an attempt to upgrade to
being a serial transaction. See below for details.
</li><li> Serial transactions can acquire method-group-specific locks because there
will be no other active nor serial transaction.
</li></ul>
<p>There is no single rule for per-method-group blocking because this depends on
when a TM method might acquire locks. If no active transaction can upgrade to
being a serial transaction after it has acquired per-method-group locks (e.g.,
when those locks are only acquired during an attempt to commit), then the TM
method does not need to consider a potential deadlock due to serial mode.
</p>
<p>If there can be upgrades to serial mode after the acquisition of
per-method-group locks, then TM methods need to avoid those deadlocks:
</p><ul>
<li> When upgrading to a serial transaction, after acquiring exclusive rights
to the serial lock but before waiting for concurrent active transactions to
finish (see <a href="#serial_002dlock_002dimpl">Serial lock implementation</a> for details),
we have to wake up all active transactions waiting on the upgrader&rsquo;s
per-method-group locks.
</li><li> Active transactions blocking on per-method-group locks need to check the
serial lock and abort if there is a pending serial transaction.
</li><li> Lost wake-ups have to be prevented (e.g., by changing a bit in each
per-method-group lock before doing the wake-up, and only blocking on this lock
using a futex if this bit is not set); a sketch of this pattern follows this list.
</li></ul>
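<p>The last point can be sketched as follows, assuming a Linux futex; the
lock-word layout and names (<code>LOCKED_BIT</code>, <code>WAITER_BIT</code>)
are illustrative and not taken from libitm:
</p>
<div class="example">
<pre class="example">#include &lt;atomic&gt;
#include &lt;climits&gt;
#include &lt;unistd.h&gt;
#include &lt;linux/futex.h&gt;
#include &lt;sys/syscall.h&gt;

static std::atomic&lt;int&gt; lock_word;       // bit 0: locked, bit 1: waiters exist
enum { LOCKED_BIT = 1, WAITER_BIT = 2 };

static void block_on_lock ()
{
  int v = lock_word.load ();
  while (v &amp; LOCKED_BIT)
    {
      // Announce the waiter before sleeping ...
      if (!(v &amp; WAITER_BIT)
          &amp;&amp; !lock_word.compare_exchange_weak (v, v | WAITER_BIT))
        continue;  // lost a race; v was reloaded, re-check the lock word
      // ... and only sleep if the word still says "locked, waiter present".
      syscall (SYS_futex, &amp;lock_word, FUTEX_WAIT, v | WAITER_BIT, nullptr, nullptr, 0);
      v = lock_word.load ();
    }
}

static void release_and_wake ()
{
  // Checking the waiter bit after clearing the word prevents lost wake-ups:
  // a waiter either saw the lock as free or set the bit that forces the wake.
  if (lock_word.exchange (0) &amp; WAITER_BIT)
    syscall (SYS_futex, &amp;lock_word, FUTEX_WAKE, INT_MAX, nullptr, nullptr, 0);
}
</pre></div>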
<p><strong>TODO</strong>: Can reuse serial lock for gl-*? And if we can, does it make
sense to introduce further complexity in the serial lock? For gl-*, we can
really only avoid an abort if we do -wb and -vbv.
</p>
<a name="Serial-lock-implementation"></a>
<h4 class="subsection">4.3.3 Serial lock implementation</h4>
<a name="serial_002dlock_002dimpl"></a>
<p>The serial lock implementation is optimized for the assumption that serial
transactions are infrequent and not the common case. However, the performance
of entering serial mode can still matter because, when only a few transactions
run concurrently or there are only a few threads, it can be efficient to run
transactions serially.
</p>
<p>The serial lock is similar to a multi-reader-single-writer lock in that there
can be several active transactions but only one serial transaction. However,
we do want to avoid contention (in the lock implementation) between active
transactions, so we split up the reader side of the lock into per-transaction
flags that are true iff the transaction is active. The exclusive writer side
remains a shared single flag, which is acquired using a CAS, for example.
On the fast path, the serial lock then works similarly to Dekker&rsquo;s algorithm, but
with several reader flags that a serial transaction would have to check.
A serial transaction thus requires a list of all threads with potentially
active transactions; we can use the serial lock itself to protect this list
(i.e., only threads that have acquired the serial lock can modify this list).
</p>
<p>We want starvation-freedom for the serial lock to allow for using it to ensure
progress for potentially starved transactions (see <a href="#progress_002dguarantees">Progress Guarantees</a> for details). However, this is currently not enforced by
the implementation of the serial lock.
</p>
<p>Here is pseudo-code for the read/write fast paths of acquiring the serial
lock (read-to-write upgrade is similar to write_lock):
</p><div class="example">
<pre class="example">// read_lock:
tx-&gt;shared_state |= active;
__sync_synchronize(); // or STLD membar, or C++0x seq-cst fence
while (!serial_lock.exclusive)
if (spinning_for_too_long) goto slowpath;
// write_lock:
if (CAS(&amp;serial_lock.exclusive, 0, this) != 0)
goto slowpath; // writer-writer contention
// need a membar here, but CAS already has full membar semantics
bool need_blocking = false;
for (t: all txns)
{
for (;t-&gt;shared_state &amp; active;)
if (spinning_for_too_long) { need_blocking = true; break; }
}
if (need_blocking) goto slowpath;
</pre></div>
<p>Releasing a lock in this spin-lock version then just consists of resetting
<code>tx-&gt;shared_state</code> to inactive or clearing <code>serial_lock.exclusive</code>.
</p>
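<p>In the same pseudo-code style, the corresponding release operations might
look like this:
</p>
<div class="example">
<pre class="example">// read_unlock (active transaction finishes or leaves the fast path):
tx-&gt;shared_state &amp;= ~active;  // clear this transaction's reader flag

// write_unlock (serial transaction finishes):
serial_lock.exclusive = 0;    // clear the single writer flag
</pre></div>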
<p>However, we can&rsquo;t rely on a pure spinlock because we need to get the OS
involved at some point (e.g., when there are more threads than CPUs to run on).
Therefore, the real implementation falls back to a blocking slow path, either
based on pthread mutexes or Linux futexes.
</p>
<a name="Reentrancy"></a>
<h4 class="subsection">4.3.4 Reentrancy</h4>
<p>libitm has to consider the following cases of reentrancy:
</p><ul>
<li> Transaction calls unsafe code that starts a new transaction: The outer
transaction will become a serial transaction before executing unsafe code.
Therefore, nesting within serial transactions must work, even if the nested
transaction is called from within uninstrumented code.
</li><li> Transaction calls either a transactional wrapper or safe code, which in
turn starts a new transaction: It is not yet defined in the specification
whether this is allowed. Thus, it is undefined whether libitm supports this.
</li><li> Code that starts new transactions might be called from within any part
of libitm: This kind of reentrancy would likely be rather complex and can
probably be avoided. Therefore, it is not supported.
</li></ul>
<a name="Privatization-safety"></a>
<h4 class="subsection">4.3.5 Privatization safety</h4>
<p>Privatization safety is ensured by libitm using a quiescence-based approach.
Basically, a privatizing transaction waits until all concurrent active
transactions have either finished (are not active anymore) or operate on
a sufficiently recent snapshot to not access the privatized data anymore. This
happens after the privatizing transaction has stopped being an active
transaction, so waiting for quiescence does not contribute to deadlocks.
</p>
<p>In method groups that need to ensure privatization safety explicitly, active
transactions maintain a flag or timestamp in the public/shared part of the
transaction descriptor. Before blocking, privatizers need to let the other
transactions know that they should wake up the privatizer.
</p>
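<p>A sketch of such a quiescence wait, in the same pseudo-code style (the
<code>snapshot_time</code> and <code>privatizer_time</code> names are
illustrative, not libitm&rsquo;s):
</p>
<div class="example">
<pre class="example">// privatization_quiesce: run after the privatizing transaction has stopped
// being an active transaction, so this wait cannot contribute to deadlocks.
for (t: all txns)
  while ((t-&gt;shared_state &amp; active) &amp;&amp; t-&gt;snapshot_time &lt; privatizer_time)
    {
      // tell t that it should wake us when it finishes or advances its
      // snapshot, then spin or block until that happens
      if (spinning_for_too_long) block_until_woken_by(t);
    }
</pre></div>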
<p><strong>TODO</strong> How to implement the waiters? Should those flags be
per-transaction or at a central place? We want to avoid one wake/wait call
per active transaction, so we might want to use either a tree or combining
to reduce the syscall overhead, or rather spin for a long time
instead of doing blocking. Also, it would be good if only the last transaction
that the privatizer waits for would do the wake-up.
</p>
<a name="Progress-guarantees"></a>
<h4 class="subsection">4.3.6 Progress guarantees</h4>
<a name="progress_002dguarantees"></a>
<p>Transactions that do not make progress when using the current TM method will
eventually try to execute in serial mode. Thus, the serial lock&rsquo;s progress
guarantees determine the progress guarantees of the whole TM. Obviously, we at
least need deadlock-freedom for the serial lock, but it would also be good to
provide starvation-freedom (informally, all threads will finish executing a
transaction eventually iff they get enough cycles).
</p>
<p>However, the scheduling of transactions (e.g., thread scheduling by the OS)
also affects the handling of progress guarantees by the TM. First, the TM
can only guarantee deadlock-freedom if threads do not get stopped. Likewise,
low-priority threads can starve if they do not get scheduled when other
high-priority threads get those cycles instead.
</p>
<p>If all threads get scheduled eventually, correct lock implementations will
provide deadlock-freedom, but might not provide starvation-freedom. We can
either enforce the latter in the TM&rsquo;s lock implementation, or assume that
the scheduling is sufficiently random to yield a probabilistic guarantee that
no thread will starve (because eventually, a transaction will encounter a
scheduling that will allow it to run). This can indeed work well in practice
but is not necessarily guaranteed to work (e.g., simple spin locks can be
pretty efficient).
</p>
<p>Because enforcing stronger progress guarantees in the TM has a higher runtime
overhead, we focus on deadlock-freedom right now and assume that the threads
will get scheduled eventually by the OS (but don&rsquo;t consider threads with
different priorities). We should support starvation-freedom for serial
transactions in the future. Everything beyond that is highly related to proper
contention management across all of the TM (including which TM method to
choose), and is future work.
</p>
<p><strong>TODO</strong> Handling thread priorities: We want to avoid priority inversion
but it&rsquo;s unclear how often that actually matters in practice. Workloads that
have threads with different priorities will likely also require lower latency
or higher throughput for high-priority threads. Therefore, it probably does
not make much sense (except for eventual progress guarantees) to use
priority inheritance until the TM has priority-aware contention management.
</p>
<hr>
<div class="header">
<p>
Next: <a href="GNU-Free-Documentation-License.html#GNU-Free-Documentation-License" accesskey="n" rel="next">GNU Free Documentation License</a>, Previous: <a href="The-libitm-ABI.html#The-libitm-ABI" accesskey="p" rel="prev">The libitm ABI</a>, Up: <a href="index.html#Top" accesskey="u" rel="up">Top</a> &nbsp; [<a href="index.html#SEC_Contents" title="Table of contents" rel="contents">Contents</a>][<a href="Library-Index.html#Library-Index" title="Index" rel="index">Index</a>]</p>
</div>
</body>
</html>