\documentstyle[epsf,ijcai97]{article}
%\setlength\titlebox{1.9in}
%%\setlength\textheight{9.8in}%9.375
\setlength{\topmargin}{0.05in}
\setlength{\textheight}{8.75in}
% Note: these are commands specifically defined in local.sty
%\openlinepar
%\twocolumn
%\psdirectories{figs,icmfigs,dadsmfig,c:/text/dadsm-96/figs}
\title{\bf \vspace{-0.3in} Managing Large Scale Computational Markets }
\author{\textbf{Arne Andersson} \\
Department of Computer Science \\
Lund University \\
221 00 Lund, Sweden \\
Arne.Andersson@dna.lth.se \\
http://www.dna.lth.se/home/Arne\_Andersson/
\And
\textbf{Fredrik Ygge} \\
EnerSearch and \\
IDE at University of Ronneby \\
372 25 Ronneby, Sweden \\
Fredrik.Ygge@enersearch.se \\
http://www.enersearch.se/$\sim$ygge
}
\date{}
\newtheorem{definition}{Definition}[section]
\newtheorem{axiom}{Axiom}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}{Proposition}[section]
\newtheorem{corollary}{Corollary}[section]
\newtheorem{algorithm}{Algorithm}[section]
\newenvironment{example}{\noindent {\bf Example. }}{}
\newenvironment{proof}{\scriptsize {\bf Proof. }}{${}_\Box$ \vspace{0.11in} \\}
\newenvironment{counterexample}{{\bf Counterexample. }}{}
\begin{document}
\maketitle
%I don't know why I have to reset thispagesyle, but otherwise get page
%numbers
\thispagestyle{empty}
\subsection*{\centering Abstract}
{\em
General equilibrium theory has been proposed for resource allocation in computational markets. The basic procedure is that agents submit bids and
that a resource (re)allocation is performed when a set of prices (one for
each commodity) is found such that supply meets demand for each commodity. For
successful implementation of large markets based on general equilibrium theory, efficient algorithms for finding the equilibrium are required.
We discuss some drawbacks of current algorithms for large scale equilibrium markets and present a novel distributed algorithm, \textsc{CoTree}, which deals with the most important problems. \textsc{CoTree} is communication sparse, fast in adapting to preference changes of a few agents, has minimal requirements on local data, and is easy to implement.
} %emph
\section{Introduction}
\noindent
In our human society, resource (re)allocations are in most cases
performed through markets. This occurs on many
different levels and in many different scales, from our daily grocery
shopping to large trades between big companies and/or nations. The market
approach to resource allocation in human society has inspired the
Multi-Agent Systems (MAS)
community to construct similar concepts for MAS, where trade is
performed between computational agents on \emph{computational markets}.
Wellman \cite{Wellman:93a} refers to this as \emph{market oriented programming}.
In computational markets, a common approach is to use a
mechanism that obtains \emph{general equilibrium}, as done by Cheng and
Wellman~\cite{Cheng:97a}. This is also the market approach investigated
here. General equilibrium is obtained when a set of prices (one price for each commodity)
is found such that supply meets demand for each commodity and where the
agents optimize their use of resource at the current price level~\cite{Mas-Colell:95a,Varian:96a}. In
this paper we only describe pure exchange markets, but all concepts
here are also well suited for the incorporation of production.
For making market-oriented programming successful, it is of vital
importance to incorporate available knowledge from economic theory. At the same time it
is important to study efficient implementations. In previous work \cite{Ygge:96a} we showed how
numerical analysis and mathematical optimization could be utilized to
implement efficient markets with a reasonable
number of agents (up to $1000$), but scaling
up to huge systems in highly distributed environments introduced some
unanticipated difficulties. In this paper we discuss the difficulties of scaling up to
large markets. We then introduce a novel algorithm, \textsc{CoTree}, which
deals with the most important problems. Our research aims at implementing the application power load management~\cite{Ygge:96a} as a computational market, but all results in this paper are general and application independent.
The paper is arranged as follows. In Section~\ref{sec:problem} we discuss
some basic economic concepts and important characteristics of two general approaches to
finding the equilibrium.
The drawbacks of current
methods in the context of large-scale distributed multi-agent systems are
discussed in Section~\ref{sec:drawbacks};
in particular problems caused by too much communication.
Then, in Section~\ref{sec:cotree} we present our new method that aims
at overcoming these problems. We then demonstrate the dynamics of the method
once an initial
equilibrium has been found (Section~\ref{sec:dynamics}). Some experimental
data is given in Section~\ref{sec:experiments}. The multi-commodity case is
discussed in Section~\ref{sec:multi}, and, finally, Section~\ref{sec:conclusions} concludes.
\section{Context and Problem}
\label{sec:problem}
\subsection{Basic Micro-Economic Concepts}
\noindent
We analyze a market with two commodities. (See Section~\ref{sec:multi}
for a discussion on how our results can be extended for
situations with more than two commodities.)
An agent's net demand describes how much it is willing to buy at a
specific price in terms of the other commodity. We denote the net demand of agent $i$ for commodity one by
$z_i(p)$, where $p$ is the price for commodity one in terms of commodity two. The
\emph{aggregate excess demand}, $z(p)$, is defined as the sum of the
net demands of all agents, i.e.
\begin{equation}
z(p) = \sum_{i=1}^{n}z_{i}(p),
\label{eq:excessDemand}
\end{equation}
where $n$ is the number of agents.
Then market equilibrium is given by
\begin{equation}
z(p) = 0.
\label{eq:priceEq}
\end{equation}
Instead of asking an agent how much it is willing to buy
or sell at a specific price, one might ask it how much it is willing to pay
for an infinitesimal amount of the commodity at the current allocation (a
price). That is, each agent, $i$, can be viewed as holding a price function $p_{i}(z_i)$,
rather than a demand function $z_i(p)$. If the net demand is monotone and continuous, there is a
bijective mapping between $p_{i}(z_i)$ and $z_i(p)$, i.e. $p_{i}(z_i)$ tells
what price would lead to a certain demand. Then Eq.~(\ref{eq:priceEq}) corresponds to
\begin{equation}
\left\{
\begin{array}{ll}
p_i(z_i) \leq p, & z_i = z_i^l \\
p_i(z_i) = p, & z_i^l < z_i < z_i^u \\
p_i(z_i) \geq p, & z_i = z_i^u
\end{array}
\right. ,
\label{eq:resEq}
\end{equation}
where $z_i^l$ and $z_i^u$ are agent $i$'s lower and upper limits on the net demand.
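The bijection between a monotone, continuous net-demand function $z_i(p)$ and the corresponding price function $p_i(z_i)$ can be sketched as follows. The linear form below is a made-up illustration for one hypothetical agent; the paper does not commit to any particular functional form.

```python
def make_agent(a, b):
    """Return (z_i, p_i) for the hypothetical net demand z_i(p) = a - b*p, b > 0."""
    def z(p):           # net demand: monotonically decreasing in price
        return a - b * p
    def price(z_i):     # inverse: the price that would lead to demand z_i
        return (a - z_i) / b
    return z, price

z, price = make_agent(a=2.0, b=0.5)
p = 1.2
assert abs(price(z(p)) - p) < 1e-12   # p_i(z_i(p)) = p, as the bijection requires
```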
\subsection{The Computational Problem}
\noindent
In this paper we investigate the standard case where the net demand is continuous and
monotonically decreasing with price. If
we can solve a market problem with such a demand, we can
solve a separable resource allocation problem with concave objective
functions~\cite{Ygge:97MAAMAW}.
The computational problem investigated is to solve Eq.~(\ref{eq:priceEq}) in a
multi-agent setting, i.e. in a setting where the information about the net
demand (and/or price) is computed from local information by local
agents.
For a (re)allocation to be performed the system must 1) compute the
equilibrium solution and 2) notify the agents about their new allocation.
Depending on the setting, the difficulty of the second step will vary
significantly. In this paper we deal only with step one, as step two is too application dependent to be discussed in general terms. In some settings, the task of reallocating commodities can be completely manual, i.e. no computerized communication will take place after the equilibrium has been found. In another setting, it might be enough to announce the equilibrium price and let every agent autonomously give and take resources as declared by their bids. In yet another situation the agents may exchange digital cash for an encrypted piece of information.
\subsection{Two General Approaches for Finding Equilibrium}
\noindent
When implementing a market mechanism for achieving general equilibrium
there are basically two alternatives: either the price is used as
the free parameter, and $p$ is updated until Eq.~(\ref{eq:priceEq})
holds, or the resource is used as the free parameter and different
transfers of resource from one agent to another are evaluated until
Eq.~(\ref{eq:resEq}) holds.
This classification leads to the notion of \emph{price-} vs.
\emph{resource-oriented} approaches. The conditions in Eq.~(\ref{eq:priceEq})
and Eq.~(\ref{eq:resEq}), together with the corresponding market mechanism, are
different ways of saying the same thing. In the first case the method
itself guarantees that the price of each commodity is equal for every agent, and
in the second case the allocation is always feasible, since resource is
only transferred between agents.
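As a concrete illustration of the price-oriented alternative, the sketch below lets an auctioneer vary the free parameter $p$ until aggregate excess demand is approximately zero. Bisection is an assumption made here for the illustration, not a prescribed update rule; it only requires $z$ to be continuous and decreasing, and the linear agent demands are hypothetical.

```python
def find_equilibrium_price(demands, lo=0.0, hi=100.0, eps=1e-6):
    """Adjust p until |z(p)| <= eps, where z(p) is the aggregate excess demand."""
    z = lambda p: sum(zi(p) for zi in demands)
    while True:
        p = (lo + hi) / 2.0
        excess = z(p)
        if abs(excess) <= eps:
            return p
        if excess > 0:      # demand exceeds supply: raise the price
            lo = p
        else:               # supply exceeds demand: lower the price
            hi = p

# Two hypothetical agents with linear net demand z_i(p) = a_i - b_i * p.
demands = [lambda p: 3.0 - 1.0 * p, lambda p: 1.0 - 1.0 * p]
p_star = find_equilibrium_price(demands)
assert abs(p_star - 2.0) < 1e-4       # z(p) = 4 - 2p, so z(p) = 0 at p = 2
```

A resource-oriented mechanism would instead hold the total allocation fixed and shift resource between agents until their marginal prices agree.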
\subsection{Characteristics of the Different Approaches}
\label{sec:precision}
\noindent
Above, we argued that the equilibrium conditions
for the price-oriented and the
resource-oriented approach,
Eq.~(\ref{eq:priceEq}) and Eq.~(\ref{eq:resEq}),
are equivalent. This is true if
the conditions are \emph{exactly} fulfilled.
However, in practice, for the
algorithms to converge in reasonable time,
termination conditions for the
algorithms will be $\left| z_{j}(p)\right| \leq \epsilon$ and
$\left| p_{ij}(x_{i}) - p_{mj}(x_{m})\right|
\leq \epsilon$ respectively, where $\epsilon$
is a small positive constant.
In the resource-oriented case, this
means that the allocation
is \emph{not perfectly fair}, i.e. some agents
will pay slightly less than they would have
done on a perfect market, while
others will pay slightly more.
In the price-oriented case, on the other
hand, the allocation is \emph{not perfectly
feasible}. This is an important
difference; in the latter case it can be impossible
to allocate the computed amount to the
involved agents, due to physical constraints. In practice this need not
be a problem, but if
$\epsilon$ is chosen to be relatively
large, e.g. for performance reasons,
and if many consecutive trades are performed,
this should be considered.
Another important difference is the input to the two approaches. In the price-oriented case the input is the demand function; in the resource-oriented case it is the price function. The price function is more closely related to the utility function (it is simply the first derivative of the utility function with respect to the first commodity divided by the first derivative with respect to the second commodity), which is the primary object in micro-economics (e.g.,~\cite{Varian:96a}), while the demand function has to be derived by function inversion.
\section{Drawbacks of Current Algorithms}
\label{sec:drawbacks}
\noindent
Even though some of the available algorithms scale nicely with the
number of agents when run on a single host,
managing a huge number of agents
and running them in highly distributed environments
introduces problems. We
showed that managing 1000 agents on a single host
causes no problems \cite{Ygge:96a}, but scaling up
to, say, a million distributed agents is not easily managed.
In our application area, power load management \cite{Ygge:96a}, the local
area networks normally have low communication costs, while the wide area networks have rather high
communication costs. We believe that this situation with highly interconnected sub-groups, and somewhat looser connections between the sub-groups is very common for other application areas as well.
We recognize some important problems with traditional algorithms. These problems are relevant not only for our application but for distributed markets in general.
\begin{itemize}
\item As the number of agents grows, the number of messages received and sent out
by the global auctioneer will be very large for each iteration. The work needed to process these messages
at a single site will create a bottleneck.
This suggests that the task of sending, receiving, and processing messages should be distributed among more than one auctioneer.
\item Even if multiple auctioneers are used, if data is communicated at each step in an iterative algorithm, the system will have to wait for the {\em slowest} agent to communicate.
This is likely to cause significant waiting
time in practice, since
\begin{itemize}
\item some agents might have rather slow
communication, as in our application area, where some of them communicate via cellular
telephones or over the electric power line.
\item even for devices that have fast communication
according to today's standard, there is a substantial
variation in response time, and when communicating
with many agents it is likely that the maximum
delay time will be large.
\end{itemize}
This suggests that the number of iterations that require communication
to reach equilibrium should be kept small.
\item Within a fairly large range, the cost for sending a
message is more or less independent of the size of the message.
Furthermore, the cost for processing the sent/received message
is normally considerably smaller than
the waiting time. Thus,
instead of sending a message
containing just one number -- as suggested in, e.g.,~\cite{Kurose:89a} and~\cite{Ygge:96a} -- one can send a set of, say, $25$ or even a few hundred numbers at essentially the same cost.
This suggests that one should
trade message size for iterations, {\em even if the
total amount of communicated information grows}. Furthermore, the fact that
the size of message headers and footers can be considerable compared to the
net data when the messages are small, may cause the actual amount of
data sent on the network to be \emph{smaller} when few iterations are
needed even if the amount of net data communicated increases. (See also~\cite{Cheng:97a}.)
\item If only the preferences of a few agents change, search for the new
equilibrium from scratch should not be required.
This suggests that information from previous computations should be stored
and reused.
\end{itemize}
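The trade-off of message size for iterations in the list above amounts to sampling a whole function into a single message. A minimal sketch, with an assumed price grid and a made-up demand function:

```python
def sample_demand(z, prices):
    """Sample a net-demand function at k prices, to be sent as one message."""
    return [z(p) for p in prices]

# Hypothetical agent with net demand z(p) = 3 - p, sampled at k = 5 prices.
z = lambda p: 3.0 - 1.0 * p
msg = sample_demand(z, [0.5 * i for i in range(5)])
assert msg == [3.0, 2.5, 2.0, 1.5, 1.0]   # one message instead of 5 round trips
```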
We would also like the algorithm to put minimum requirements on local data, i.e. not be dependent on
properties such as the derivative of net demands and prices. Furthermore, it is important that the algorithm is numerically stable, i.e. be guaranteed to find the equilibrium as long as the requirements (e.g. that $z$ is continuous and decreasing) are fulfilled, while at the same time being computationally efficient.
Such an algorithm is presented in the next section.
\section{The {\sc CoTree} Algorithm}
\label{sec:cotree}
\noindent
The \textsc{CoTree} algorithm (\textsc{Co}mbinatoric \textsc{Tree}) is a
hierarchical approach to computing the equilibrium. Its principles
and properties are described in this section.
\subsection{Basic Principles}
\noindent
We let the hosts in the system form a logical tree.\footnote{The hosts need not be physically coupled in a tree structure, though this is an advantage. For our application of power load management, they will physically form a tree.} An example of such a tree is shown in Figure~\ref{fig:hosts}. The basic idea is that at each level in the tree, a compound demand function (if a price-oriented approach is used) or price function (if a resource-oriented approach is used) is constructed. The compound function is a sampled function with $k$ samples, and it is used as the input to the computation at the next higher level in the tree. Each host receives one message from each of its children and produces a compound function of $k$ samples.\footnote{In this paper we only discuss the use of a single $k$ throughout the system, though this is not necessary.} Thus, the required data is communicated in one single round, and the number of messages required is proportional to the number of hosts.
\begin{figure}[!hbt]
\epsfxsize=2in
\hspace*{\fill}
\epsffile{figs/hosts.eps}
\hspace*{\fill}
\caption{\it A logical tree of hosts.}
\label{fig:hosts}
\end{figure}
The basic operation in \textsc{CoTree} for the computation of the compound function is the pair-wise aggregation of preferences, see Figure~\ref{fig:basop}. Consequently, each host holds a binary tree in which each leaf represents the preferences of one of its children in the host tree and the root node represents the compound function for all of its children. When the compound function of one host is sent to the next level, it will be represented by a leaf at that level. Thus, the entire system can be viewed as a binary tree with all consumers and producers as leaves. We refer to the non-leaf nodes as \emph{auctioneers}. The root auctioneer holds the aggregate preferences for the entire system. Each auctioneer holds three vectors with information about its children: one compound demand or price function and two allocation vectors telling how much was assigned to each of the two children at each of the samples in the compound function.
\begin{figure}[!hbt]
\epsfxsize=2in
\hspace*{\fill}
\epsffile{figs/basop.eps}
\hspace*{\fill}
\caption{\it The basic operation of \textsc{CoTree}: pair-wise aggregation.}
\label{fig:basop}
\end{figure}
\subsection{The Resource-Oriented Case}
\label{sec:resOverview}
\noindent
In the resource-oriented case the compound function is a price function, computed as follows. Say that we are to compute the aggregate price $p(3)$, that we computed $p(2)$ in the previous iteration, and that the allocations to the left and right child at a total allocation of $2$ were $1.5$ and $0.5$ respectively. Since the prices are monotonically decreasing with demand (recall that the demand is continuous and decreasing with price by precondition), the only possible allocations at a total of $3$ range from $1.5$ to $2.5$ for the left child and from $0.5$ to $1.5$ for the right child. With \textsc{CoTree} we first try the end points $\langle 1.5, 1.5 \rangle$ and $\langle 2.5, 0.5 \rangle$. If neither of these is the equilibrium solution, we interpolate between these end points. This results in a constant number of operations for each slot of the compound vector. The computation has to be repeated for each auctioneer on the host. If all leaves are managed by one host the complexity is ${\cal O}(nk)$, where $k$ is the number of slots and $n$ is the number of leaves. If each host only manages two children, the complexity is ${\cal O}(k \log n)$. Further details are given in the Appendix.
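The pair-wise aggregation step can be sketched as follows: for each sampled total allocation $t$, split $t$ between the two children so that their marginal prices agree, and record both the compound price and the per-child allocations. Bisection on the split is used here in place of the interval-plus-interpolation step described in the text, and the linear child price functions are illustrative assumptions, not part of the algorithm.

```python
def aggregate_pair(p_left, p_right, totals, tol=1e-9):
    """Compound price function of two children with decreasing price functions."""
    compound, alloc_left, alloc_right = [], [], []
    for t in totals:
        lo, hi = 0.0, t                     # x is the left child's share of t
        while hi - lo > tol:
            x = (lo + hi) / 2.0
            if p_left(x) > p_right(t - x):  # left values the margin more
                lo = x
            else:
                hi = x
        x = (lo + hi) / 2.0
        compound.append(p_left(x))          # common marginal price at this total
        alloc_left.append(x)
        alloc_right.append(t - x)
    return compound, alloc_left, alloc_right

# Hypothetical children with decreasing marginal prices p(x) = c - x.
p_l = lambda x: 10.0 - x
p_r = lambda x: 8.0 - x
prices, left, right = aggregate_pair(p_l, p_r, [0.0, 1.0, 2.0, 3.0])
# At t = 3: p_l(x) = p_r(3 - x) gives x = 2.5 and a common price of 7.5.
assert abs(left[-1] - 2.5) < 1e-6 and abs(prices[-1] - 7.5) < 1e-6
```

Note that the bisection also handles the boundary cases naturally: at small totals the left child (which values the first units more) simply receives everything.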
\subsection{The Price-Oriented Case}
\label{sec:price}
\noindent
\textsc{CoTree}, as presented above, is a resource-oriented scheme, but a
similar price-oriented approach is also conceivable. This would mean that
every auctioneer, instead of holding a price vector telling how
much its
children are willing to pay for an infinitesimal amount of resource at
certain allocations, would hold a net demand vector, telling how much its
children would change their allocation at different prices.
The advantage
of such an approach is that the net demand vector of an auctioneer is
simply the sum of the net demand vectors of its children. The complexity of this is ${\cal O}(nk)$, i.e. the same scaling with accuracy and number of children as in the resource-oriented approach. However, the computations here are simpler, and thereby the price-oriented approach would be faster. This assumes that the preferences of the consumers and producers are given as demand functions.
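The price-oriented aggregation step is then just an element-wise sum: if both children report net demand sampled at the same $k$ prices, the auctioneer's compound demand vector is their sum. A minimal sketch (the sampling grid and child values are made up for illustration):

```python
def aggregate_demand(z_left, z_right):
    """Compound net-demand vector: element-wise sum over the same k price samples."""
    assert len(z_left) == len(z_right)      # children must use the same price grid
    return [zl + zr for zl, zr in zip(z_left, z_right)]

# Net demand of two hypothetical children sampled at prices [1, 2, 3].
z_l = [2.0, 1.0, 0.0]
z_r = [1.0, 0.0, -1.0]
assert aggregate_demand(z_l, z_r) == [3.0, 1.0, -1.0]
```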
The disadvantage is that it results in an allocation which is not perfectly feasible. For cases where this is not acceptable, one could think of ways to assure that exactly the resource that is
allocated to one auctioneer is allocated to its children. Then some computations would be required when computing the allocations rather than when computing the equilibrium (as was done in the resource-oriented case). For our
application area, the computation time required for computing the
equilibrium (as will be described in Section~\ref{sec:experiments}) is completely negligible compared to the communication time. We believe this to be true for most other application areas as well.
Thus, whether a resource-oriented or price-oriented
approach is chosen is of minor importance.
\section{The Dynamic Situation}
\label{sec:dynamics}
\noindent
As discussed in the previous section, \textsc{CoTree} has the advantage of being communication sparse during the initial equilibrium computations. However, the major advantages of \textsc{CoTree} occur in the dynamic situations where an equilibrium has been
computed once and then some agents change their preferences. This section
discusses those issues. The discussion is relevant both for the resource- and
price-oriented approaches.
\subsection{The Change of an Agent's Preferences}
\label{sec:singleChange}
\noindent
If the preferences of only one agent change and everything else remains
unchanged, the computations only have to be performed along the path from
this agent to the root to compute the new equilibrium allocation.
Let $d$ and $n$ denote the number of children per host, and the
number of leaves respectively. The tree of hosts has height
${\cal O} \left( \frac{\log{n}}{\log{d}} \right)$. As an example, assume that $d=100$ and
$n=1000000$. Then the height of the tree is $3$. If the preferences of one
agent change, only $3$ messages are required to compute the new
equilibrium. This is in stark contrast to methods in which the aggregate functions are not cached, where one
million messages would have to be sent if the agents submit entire
functions, and one million times the number of iterations if the agents only
submit single information items. (Specifically, if single information items are used, such as the demand and its derivative at a certain price, it is impossible to cache data so that only three messages are required.)
We also note that if we have a large consumer or producer whose preferences
change frequently, it is wise to put this agent close to the root, in order to minimize communication. In particular, if the agent representing such a consumer or producer resides on the same host
as the root auctioneer, \emph{no} inter-host communication is required for obtaining the new equilibrium. We note that here the computation of the new equilibrium is close to instant, while the traditional methods typically require recomputation from scratch.
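The message count in the example above follows directly from the height formula; a small sketch (base-10 logarithms are used so that the powers of ten in the example are exact):

```python
import math

def tree_height(n_leaves, d):
    """Height of the host tree with n leaves and d children per host:
    ceil(log n / log d)."""
    return math.ceil(math.log10(n_leaves) / math.log10(d))

# The example from the text: d = 100 children per host, n = one million agents.
assert tree_height(1_000_000, 100) == 3   # one message per level on the root path
```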
\subsection{When to Perform a Recomputation}
\noindent
For some applications it might be the case that the recomputations are
performed periodically, regardless of whether or not they are really
needed. In other applications, recomputations might be performed when
needed. We investigate the latter case here, and it turns out to be a
delicate issue.
If the preferences change only for one agent, it is possible that
the reallocation resulting from a total recomputation of the equilibrium is
so small that it is not worth the effort, even if the preference change
seems quite large from the local perspective. At the same time, it is possible that
there is a quite small change of \emph{every} agent's preferences,
which would lead to significant reallocations for the group as a whole.
Thus, it is not possible to tell from the local perspective whether
or not a recomputation is needed. The same holds at the global level; we
cannot tell if a recomputation is required without asking many local agents.
In an approximation scheme, we expect the equilibrium
to be close to the true equilibrium without necessarily being exactly
correct. Hence, after a small change of a few agents' preferences it may not be
necessary to change the (global) price/allocation at all.
In this case, we would like the recomputation of price/allocation to ``fade
out'' after very little communication as it propagates upwards in the tree.
One way to implement this is to assign a permitted deviation at
each level in the tree. Assume that we have a resource-oriented \textsc{CoTree}
where we allow the values
in the global price vector to deviate from the correct values by
at most $\delta$. We can then spread this tolerance out
over the auctioneers in the tree. Each auctioneer $i$ is assigned a tolerance
$\delta_i$, such that the tolerances sum to $\delta$.
Exactly how the tolerances
should be spread is an intriguing topic that requires a number of
assumptions
on the sizes and update frequencies in the individual agents' preferences.
Once the tolerances have been assigned, each auctioneer keeps track
of the difference between its current price vector and the price vector
that was last sent to the parent. As long as this difference is less
than (or equal to) $\delta_i$, no message is sent to the parent. In this
way, small local changes will not propagate up to the root.
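The reporting rule above can be sketched in a few lines: an auctioneer forwards a new price vector to its parent only when it deviates from the last reported one by more than its assigned tolerance $\delta_i$. The vectors and the maximum-deviation measure are illustrative assumptions.

```python
def should_report(current, last_sent, delta_i):
    """True iff the current price vector deviates from the last one sent to the
    parent by more than the auctioneer's tolerance (max over the k samples)."""
    return max(abs(c - l) for c, l in zip(current, last_sent)) > delta_i

last = [5.0, 4.0, 3.0]                                   # last vector sent upwards
assert not should_report([5.02, 4.01, 3.0], last, 0.05)  # change absorbed locally
assert should_report([5.2, 4.0, 3.0], last, 0.05)        # change propagates upwards
```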
It should be noted that if the changes in preferences can be seen as random, the
changes are likely to even out as they propagate upwards in the tree.
In all, this is a non-trivial question and more research is needed here in
order to obtain a good method. This problem is general and not unique to
\textsc{CoTree}. We do however think, as indicated above, that one can
benefit from using a tree structure when attacking this problem.
\section{An Experimental Case Study}
\label{sec:experiments}
\begin{table*}[htb]
\begin{center}
\begin{tabular}{||l|r|r|r|r|r||} \hline \hline
Method&Improvement&\multicolumn{2}{c|}{Initial computation}&\multicolumn{2}{c|}{Update}\\ \hline
&&Time (s)&\# Messages&Time (s)&\# Messages \\ \hline
Newton, $\epsilon = 0.001$&32.7004&0.22&9838&0.11&4935\\ \hline
Newton, $\epsilon = 0.01$&32.7004&0.19&8851&0.085&3948\\ \hline
Newton, $\epsilon = 0.1$&32.7001&0.17&6885&0.065&2961\\ \hline
Newton, $\epsilon = 0.2$&32.6975&0.13&5906&0.065&2961\\ \hline
Newton, $\epsilon = 0.3$&32.6034&0.11&4919&0.044&1974\\ \hline
\textsc{CoTree}, 200 int&32.7004&2.58&1000&0.050&1 \\ \hline
\textsc{CoTree}, 100 int&32.6992&1.54&1000&0.030&1 \\ \hline
\textsc{CoTree}, 50 int&32.6963&0.99&1000&0.019&1 \\ \hline
\textsc{CoTree}, 25 int&32.6647&0.77&1000&0.015&1 \\ \hline
\textsc{CoTree}, 10 int&31.7043&0.66&1000&0.013&1 \\ \hline \hline
\end{tabular}
\end{center}
\caption{\em Simulation results with 1000 children. \textsc{CoTree} requires significantly less communication than the Newton algorithm, in particular for updates.}
\label{tb:performance}
\end{table*}
\noindent
Above we argued that the \textsc{CoTree} algorithm is in most cases superior
to available algorithms from a communication point of view. We also argued that the algorithm has good asymptotic behavior with respect to internal computation. However, in practice, constant factors may be as important as asymptotic behavior. Therefore, we give some performance measures of \textsc{CoTree} and compare them to
corresponding figures for a Newton algorithm.
In our application area, one host can have at most something like 1000
children to manage, and we investigate a computation of this size. (Thus, for our application we get an upper limit of the total computation time from multiplying the computation time of one host with $1000$ children by the height of the host tree.) For the experiments we let the agents hold
simple exponential utility functions and act
competitively\footnote{The utility functions are $u({\mathbf x}) = ae^{-bx_{1}}+x_{2}$,
with randomly generated endowments and parameters $a$ and $b$. Hence, the price
function for each agent is $p_1(z_1) = -abe^{-b(z_1+e_1)}$, where $e_1$ is the endowment (i.e. initial allocation) of commodity $1$. Further, we let $0
\leq x_1 \leq 3$ and $x_2$ be unbounded.}. We compare the
performance of the resource-oriented \textsc{CoTree} algorithm as described
in Section~\ref{sec:resOverview} with the performance of a
resource-oriented Newton scheme using a \textsc{Relax} approach~\cite{Ibaraki:88a} for
managing boundaries as described in \cite{Ygge:96a}. In the Newton scheme the
agents submit bids consisting of their current price and its
derivative (for details refer to \cite{Ygge:96a}). The result of the
comparison is presented in Table~\ref{tb:performance}. The improvement
column is a measure of how good the result is. Since we know, for this case,
that the market equilibrium maximizes the total utility of the system~\cite{Ygge:97MAAMAW}, we
can measure how close to the equilibrium the solution is by observing the total utility. The improvement
is simply the difference between the total utility after the first
reallocation and the initial total utility. The next column tells how long
the execution time\footnote{The algorithms were implemented in C++ and run on a Pentium Pro 200MHz PC.} was for computing the equilibrium from the initial
endowments, and the column after that describes how many messages were
required. Note that the number of messages is not necessarily a multiple of
1000 because of how the \textsc{Relax} algorithm works. If an agent is
outside its boundary after one iteration it will not be a part of the next
iteration.
Then we also illustrate the discussion given in
Section~\ref{sec:singleChange}; the rightmost columns show the computation
time and the number of messages required when the preferences of one agent
changes. For this situation we have assumed that the change appears in
one of the leaves of \textsc{CoTree}. As discussed in
Section~\ref{sec:singleChange}, if we have prior knowledge about the frequency
with which the preferences change for different agents and how much this
affects the market, we can design the topology of \textsc{CoTree} to take
advantage of this. The results presented in this experiment thus represent the upper limit of the computation time of an update.
From Table~\ref{tb:performance} we see that when using 25 intervals with
\textsc{CoTree} the ratio of execution time between \textsc{CoTree} and a
Newton scheme with a corresponding accuracy is somewhere around six. This
is a positive result since it means that \textsc{CoTree} is not \emph{that}
much slower than the Newton scheme. The result of a corresponding comparison for the case where only one agent changes its preferences, once the initial equilibrium has been computed, is that \textsc{CoTree} is now approximately four times \emph{faster}.
Our experiments strongly indicate that \textsc{CoTree} performs well compared to a resource-oriented Newton scheme. At the same time it should be emphasized that a computation
time of the magnitude described above is normally
\emph{completely negligible}. For example, in our application area,
power load management, the time for computing the allocation with $1000$
children and $25$ intervals ($0.77$s) is in the same order of magnitude as the
time required for sending \emph{one} message on the electrical power line
with today's communication systems. Therefore, we argue that when the computation time is of this order of magnitude, it is really the number of messages that is the interesting performance measure, and from this perspective \textsc{CoTree} (and any other method that saves entire preference functions from previous rounds) has considerable advantages.
Another important remark is that \textsc{CoTree} does
not depend on the derivative of the price and is thereby useful for a wider class of
problems. Furthermore, the execution time of \textsc{CoTree} is virtually
independent of the shape of demand functions, as long as $z$ is
monotonically decreasing and continuous, while the convergence speed of the Newton scheme is
heavily dependent on those shapes. Therefore it is likely that
\textsc{CoTree} is preferable to the Newton scheme in many situations,
even when every agent is run on a single host, and no inter-host communication is present.
\section{Managing the Multi-Commodity Case}
\label{sec:multi}
\noindent
So far we have only demonstrated the search for equilibrium in a market with two
commodities. Even though this setting is useful for some
realistic problems, e.g. \cite{Kurose:89a,Ygge:96a,Ygge:97MAAMAW}, it is of interest to
investigate how \textsc{CoTree} can be used for the multi-commodity case.
The two-commodity case can be extended to the multi-commodity case in a
variety of ways. One way to perform such an extension is to compute the
equilibrium for one market (one market per commodity) at a time. This
procedure is repeated until every market is in equilibrium. Note that
during this process the equilibrium for one market may have to be computed
several times, since there normally are interdependencies between the
markets and a change of the price in one market will affect the price
in another. For a detailed discussion of the convergence of this scheme, refer to~\cite{Cheng:97a}. We note that if the
different markets are loosely coupled, in the sense that just a few agents
act on several markets, \textsc{CoTree} will perform very well
compared to the alternatives. In this case only the preferences of a few agents
change in each market, and we will benefit from the advantages of
\textsc{CoTree} as described in Section~\ref{sec:singleChange}.
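In outline, the one-market-at-a-time scheme can be sketched as follows. This is our own minimal illustration, not code from the paper: the function names and the convergence test are assumptions, and, as noted above, convergence of the sweep itself is not guaranteed in general.

```python
def clear_markets(prices, find_equilibrium, tol=1e-6, max_sweeps=100):
    """Clear one market at a time; repeat until no price moves.

    prices: dict commodity -> current price.
    find_equilibrium(commodity, prices): clearing price of one commodity,
    given the current prices of all the others (hypothetical callback)."""
    prices = dict(prices)  # work on a copy
    for _ in range(max_sweeps):
        max_move = 0.0
        for commodity in prices:
            # clearing this market may push the other markets out of
            # equilibrium, which is why the outer sweep must repeat
            new_price = find_equilibrium(commodity, prices)
            max_move = max(max_move, abs(new_price - prices[commodity]))
            prices[commodity] = new_price
        if max_move < tol:  # a full sweep moved no price: general equilibrium
            return prices
    raise RuntimeError("no convergence within max_sweeps")
```

Each call to `find_equilibrium` would itself be carried out by \textsc{CoTree} (or any other single-market equilibrium algorithm).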
On the other hand, if the markets are not loosely coupled, the approach of
treating every market separately will not lead to very high performance
(neither with \textsc{CoTree} nor with any other algorithm for finding the
equilibrium of each market). Even though the separation in terms of markets
allows for some distribution of
the search for equilibrium, the auctioneers of the separate markets must
come to consensus regarding whether or not the general equilibrium has been
reached. Then it is not clear that the gain of decentralizing the
computation is that large after all. It might be the case that the markets
for the different commodities are inherently distributed,
but then it seems more reasonable that the resource is reallocated
every time a partial equilibrium is reached, i.e. we would have repeated
tatonnement processes in each market, but a non-tatonnement search for the
general equilibrium. Furthermore, Fletcher \cite[p.~18-19]{Fletcher:87a}
argues that decomposing this kind of search into separate searches for
each variable is "usually very ineffective" and that these "\ldots\
early \emph{ad hoc} approaches are gradually falling out of use". Examples
of alternative algorithms are multi-variable Newton algorithms with
variable step size, where the search for the equilibrium prices is performed
in parallel~\cite{Press:94a,Fletcher:87a}. However, the drawbacks of
current algorithms, as described in Section~\ref{sec:drawbacks}, hold for the multi-commodity
case as well.
As part of current work we are expanding \textsc{CoTree} in new ways for
the multi-commodity case. It is a delicate task and it seems hard to find a
general approach such as the one described here for the two-commodity case.
Rather we believe that for certain applications one can find efficient
algorithms if heuristics about, e.g., the relations between the demands of
the different commodities, are utilized. (For example, in our own area of
power load management, if the consumption in the current and future time
slots is treated as the different commodities, the coupling between
adjacent slots is much stronger than the coupling between slots far apart.)
%\section{\textsc{CoTree} and Standard Resource Allocation}
%\label{sec:resAlloc}
%
%\noindent
%Finally, we include a brief discussion
%on how \textsc{CoTree} may be used to solve the resource
%allocation problem defined by
%\begin{equation}
% \begin{array}{ll}
% \max{} \sum f_{i}({x_{i}})
% \\ s.t. \,\,\, \sum x_{i} = X_i,
%\end{array}
%\label{eq:reaAlloc}
%\end{equation}
%and where we assume the objective functions, $f_i$, to be separable, i.e.
%that $f_i(x_i)$ is unaffected by $x_j$ if $i \neq j$.
%
%\subsection{Concave Objective Functions - A Specific Case}
%\noindent
%As pointed out before \cite{Ygge:97MAAMAW}, if all objective
%functions $f_i$ are separable and concave, we can transform the original resource
%allocation problem to a proper market exchange where the agents hold the
%utility functions $u_i(x_i,m_i) = f_i(x_i) + m_1$, and then the competitive
%market equilibrium fulfills the Kuhn-Tucker conditions and is hence an
%optimal solution. Thus, we can use the approach described in
%Section~\ref{sec:cotree} to solve the resource allocation problem when the
%objective functions, $f_i$, are separable and concave.
%
%Compared to compeeting methods we believe \textsc{CoTree} to be very useful for
%this kind of problems, mainly because of the low communication demands, the
%low demands on information of $f_i$, and the dynamic properties as
%described in Section~\ref{sec:dynamics}. Furthermore,
%\textsc{CoTree} can be extended to cope with cases with
%non-concave objective functions as well.
%
%\subsection{Non-Concave Objective Functions - The General Case}
%\label{sec:nonConcave}
%\noindent
%If the objective functions are non-concave, we are faced with a
%much harder optimization problem. However, it is well known
%that if the objective functions are separable functions over the
%integers, the resource allocation problem
%can be solved with dynamic
%programming in time which is polynomial in
%the number of users of the resource and the size of the total resource.
%More precisely, if the number of users is $n$ and the
%total resource is $R$, the optimal allocation can be
%computed in ${\cal O}(n R^2)$ time~\cite{Ibaraki:88a}.
%
%An obvious idea for how to use this method also when the objective
%functions are defined over the real axis would be to sample the
%functions at sufficiently small intervals.
%In other words, we select a small
%unit resource and decide to only allocate entire units.
%If the unit is chosen small enough, we may hope that our
%solution to the integer allocation problem will be a good
%approximation of the solution to the original problem.
%However, when the number of users gets large, we run into problems
%also with this approximation scheme.
%If the total resource consists of $R$ units and there
%are $n$ users, each user will on the average get $R/n$ units
%allocated. If the user's individual objective function is to
%be meaningful, the unit must be small enough so that
%an interval of size $R/n$ is covered by a fairly large
%number of samples. Say that we wish a local objective function
%to be sampled at $k$ values. This implies that $R/n \geq k$.
%As an example, assume that we have one million users and that a local
%objective function should be sampled with 100 values.
%Then, $R \geq kn = 10^8$. Now, the expression $n R^2$ within
%the big-${\cal O}$ notation above becomes $10^{22}$, which gives an impractical execution time.
%
%A way to get around this by some further approximation would be
%to use large resource units at the global level and small units
%at local levels. This is essentially what we did with the \textsc{CoTree}
%algorithm presented in Section~\ref{sec:cotree}; at each node in the tree we used a fixed
%($k$) number of sample values to represent our aggregated functions. There is a number of ways how one can %do the actual aggregation, and we sketch one here as an example.
%
%We start by describing how the computation is performed on a single node. Assume that we have two users $x$ %and $y$ which can be allocated
%resources in the intervals $[x_0,x_1]$ and $[y_0,y_1]$, respectively.
%Furthermore, assume that both objective functions are approximated
%by $k$ sample points.
%The aggregated objective function covers the interval
%$[x_0+y_0, x_1+y_1]$ and we wish to produce an approximation
%of $k$ sample values in this larger interval. We simply divide the
%interval into $k$ subintervals of equal length. Next, we try all
%$k^2$ combinations of $x$'s and $y$'s sample values. Each combination
%fits within one of the intervals (ties are broken arbitrarily) and
%for each interval we keep track of the combination with the highest
%total value, and how much was allocated to $x$ and $y$ for each combination. Thus, after all combinations %have been tried, we have produced a sampled representation of the aggregated objective function, where each %slot in the $objective$ vector holds the highest value of the sum of $x$'s and $y$'s objective function within the %given interval, and the $leftAlloc$ and $rightAlloc$ vectors holds the corresponding amounts of resource %allocated to $x$ and $y$. If it turns out that any slot of $objective$ is empty after the computation, it is filled %with an interpolation from adjacent intervals.
%
%Compared to the resource-oriented \textsc{CoTree} discussed in Section~\ref{sec:cotree}
%there are three main differences. First, instead of a price/demand
%function we represent an objective function. Second, the computation
%cost for computing the aggregated function of two agents has increased
%from ${\cal O}(k)$ to ${\cal O}(k^2)$. Third, each slot in the $objective$ vector (corresponding to $price$ vector in
%Section~\ref{sec:compAuct}) does not correspond to a given amount of resource. It holds the highest sum of %the objective functions within a given interval. The exact amount it corresponds to is obtained by summing the %corresponding slots of $leftAlloc$ and $rightAlloc$. When the aggregated objective function is computed and %sent to the next level, not only the $utility$ vector is sent, but also a vector telling how much resource each slot %in the utility vector corresponds to.
%
%The computation of the global objective function is computed bottom-up (as the price function was computed %in Section~\ref{sec:cotree}). Once a node has computed its objective function, it is sent to its parent which, in %turn, computes its objective function and sends it to its parent until the root node is reached.
%
%We now take a look at the total computation time through using the same examples as before, the case where %$d=n$ and where $d=2$ and every agent has their own host. If $d=n$, the total computation time will be the %computation time of each "auctioneer" times the number of "auctioneers", i.e. ${\cal O}(n k^2)$. If $d=2$ and every %agent has their own host the total computation time will be the computation time of each "auctioneer" times the %height of the tree, i.e. ${\cal O}(k^2 \log n)$. Changing the objective function of a single agent would require
%recomputation along a single path down the tree; the cost for this is ${\cal O}(k^2 \log n)$, regardless of whether or %not the agents are distributed or located on one host.
%
%As one more example if $d=1000$ and there are 1 million users of the resource, we will have a tree of height %$2$. Since the computation time is ${\cal O}\left( \frac{\log n}{\log d}k^2 \right)$, the expression $\frac{\log n}{\log %d}k^2$ within
%the big-${\cal O}$ notation above becomes $2 \cdot 1000 \cdot k^2$. Even if we allow a $k=100$, the computation %time will merely be the time for a number of basic instructions, such as comparisons, times $2 \cdot 10^7$, %which certainly is tractable. If we would consider this time two long and we can set $d=100$ and hence increase %the depth of the tree by one. Now the computation time is in the order of $3 \cdot 10^6$. And if we now %reduce the number of intervals to $25$ we are down to the order of $2 \cdot 10^5$. As before, no iterations are %required in the search for the (approximated) optimal solution, and \textsc{CoTree} will be communication %effective, especially when only a few objective functions changes.
%
%\subsection{Mixing the Approaches}
%\noindent
%From the above we see that the computations required for the non-concave case
%is considerably slower than the computation required for concave case. Hence, if we know that the objective %functions are concave
%we choose the method for concave objective functions and one for non-concave objective functions otherwise.
%If we are not sure whether or not the objective functions are concave we would like
%\textsc{CoTree} to select method for us. This can be done in a straight forward manner. Furthermore, we can %let \textsc{CoTree} choose different strategies at different nodes. While the $objective$ vector is computed for %a node it can
%simultaneously be tested if the aggregated objective function is concave or not. Thus, a
%children can notify its parent whether or not its objective function is concave,
%requiring only one extra bit of information and very little computation. A node having two concave
%children which minimal allocation are equal (for some given tolerance), can
%use the method for concave objective functions.
%
%We strongly believe that for many practical applications,
%e.g. energy management, it is the case that even when local objective
%functions are not concave, aggregated ones at a certain level are (for some given concavity measure and %tolerance). It seems clear that if many of the local objective functions have high derivatives when they are %assigned a small amount of resource and if many of the local are satiated at some point (i.e. will have a low %derivative above a certain amount of allocated resource), the aggregated objective function will have a high %derivative for low allocations and will come to satiation at some point. Intuitively it also makes sense that for %the values in between, large increases in local objective functions are taken advantage of as early as possible. %This suggests that the aggregated objective function strives towards concavity as more local objective functions %are being included. We show two examples to demonstrate that there are instances where the local objective %functions rapidly aggregate to a concave objective function, even when these local function appear to be "ill-%shaped".
%
%In the first example, the local objective functions were chosen so that they had significantly high derivatives for %small amounts of resource and that they were satiated at some point. In the interval in between they were %deliberately chosen to be "ill-shaped". The two objective functions are shown in Figure~\ref{fig:twoobj}.
%
%%-----------------------
%\begin{figure}[!hbt]
%\epsfxsize=2.5in
%\hspace*{\fill}
%\epsffile{figs/twoobj.eps}
%\hspace*{\fill}
%\caption{{\it Two local objective functions and the corresponding compound objective function.}}
%\label{fig:twoobj}
%\end{figure}
%%-----------------------
%
%The objective functions have been sampled through $11$ samples $(0 ,1 , \ldots\ , 10)$. The compound %objective function was obtained by combining all solutions of the children and keeping the best for each %resource $0-20$, i.e. the optimal solution to the corresponding integer problem. It is noteworthy how much %"more concave" the compound function is than the two functions of the children. An even more interesting %illustration is given in Figure~\ref{fig:sextobj}. Here the optimal integer solution for eight users holding the %objective function of the left child above and eight users holding the one of the right child. We see that the %compound objective function can be very accurately approximated by a concave function.
%
%%-----------------------
%\begin{figure}[!hbt]
%\epsfxsize=2.5in
%\hspace*{\fill}
%\epsffile{figs/sextobj.eps}
%\hspace*{\fill}
%\caption{{\it The compound objective function of $16$ users.}}
%\label{fig:sextobj}
%\end{figure}
%%-----------------------
%
%It turns out that one can find examples of local objective functions that have even worse appearance than the %two above and still get a fast aggregation to a concave objective function. Our second example is shown in %Figure~\ref{fig:striking}. The method for aggregation is the same as the one in the first example. Here the %objective functions are chosen such that they are \emph{decreasing} for low resources and \emph{increasing} %at their maximum. Still the compound objective function for $16$ users is close to a concave function. The main %deviations are in the regions close to the lower and upper limits.
%
%%-----------------------
%\begin{figure}[!hbt]
%\epsfxsize=2.5in
%\hspace*{\fill}
%\epsffile{figs/striking.eps}
%\hspace*{\fill}
%\caption{{\it Aggregation of objective functions that decrease for low resources and increase for high %resources.}}
%\label{fig:striking}
%\end{figure}
%%-----------------------
%
%We conjecture that the above phenomena can be theoretically verified for very general local functions under %some reasonable assumptions.
%
%Since the cost for performing the concavity test is low, there are very good reasons to perform the test when %implementing \textsc{CoTree} for non-concave problems. In summary, we believe that taking advantage of the %"aggregation towards concavity" will be very useful in practice.
%
%When every node of \textsc{CoTree} is run on a single host the gain of the method in terms of reduced %computation time is very limited since the majority of all nodes are in the lower levels of the tree. Even if the %objective function can be treated as concave already after $4$ levels the concave search would only be %performed in approximately $\frac{1}{16}$ of all nodes. On the other hand if we have one million users and %$d=100$ (a tree of hosts of height $4$, where each host would have a tree of "auctioneers" of height $8$) and %we can utilize the concave method in three of the four layers of hosts, the gain in computation time would be %significant, especially for high $k$s. In such settings we might afford to have a quite high $k$ in the upper levels %of the host tree.
\section{Conclusions}
\label{sec:conclusions}
\noindent
In this paper we reported difficulties with using available algorithms in
distributed computational markets. We introduced a novel algorithm
\textsc{CoTree} which:
\begin{itemize}
\item requires minimal information for computing the equilibrium (i.e.
certain derivatives as used in Newton methods are not required). It performs well regardless of whether the input consists of demand or price functions.
\item is communication effective.
\item performs excellently when responding to changes of, e.g., a single agent's
preferences.
\item scales up nicely, even to huge markets. When run in a totally distributed environment the execution time is ${\cal O}(k \log n)$, where $k$ is the number of samples in the price or demand function and $n$ is the number of consumers/producers. Even more important, the longest communication chain is ${\cal O}(\log n)$, the number of messages for a total recomputation is ${\cal O}(n)$, and the size of each message is $\Theta (k)$. If only the preferences of one agent change, the required number of messages is ${\cal O}(\log n)$.
\item has a number of advantages over Newton algorithms even when run on a
single host.
\end{itemize}
As shown in this paper, \textsc{CoTree} can be seen as an algorithmic framework rather than a specific algorithm. There are still issues to be further investigated. In our opinion, one of the most interesting is how one should choose the number of samples (denoted $k$ throughout the paper) on different levels in the system, depending on the computational power at those levels as well as on heuristics about the price/demand functions.
We have also implemented a variant of the \textsc{CoTree} algorithm which is well suited for standard resource allocation, even with non-concave objective functions. A publication describing this will soon be made available.
\bibliography{ygge}
%\bibliographystyle{unsrt}
\bibliographystyle{named}
%\bibliographystyle{plain}
\appendix
\section*{Appendix: Details of the \textsc{CoTree} Algorithm for the Resource-Oriented Case}
\subsection*{Structure and Communication}
\noindent
Each auctioneer keeps an approximation of the compound price
function of its descendant nodes, as well as information on how to
distribute resources between its children. This is visualized in
Figure~\ref{fig:auctStr}.
%-----------------------
\begin{figure}[!hbt]
\epsfxsize=3in
\hspace*{\fill}
\epsffile{figs/auctstr.eps}
\hspace*{\fill}
\caption{{\it The structure of an auctioneer.}}
\label{fig:auctStr}
\end{figure}
%-----------------------
The $price$ vector holds $k$ values, each value being the marginal
price that the auctioneer's children have at that change in resource, $z$.
Thus, $price$ gives the price at sampled points, from the
situation where the auctioneer's children sell as much of the resource as
possible to the situation where they buy as much as possible\footnote{For
applications where there is no limit to how much an agent can buy, good
guesses of a possible interval can be used. If the allocation should end up
at the border of this interval, new limits can be set and only a small part
of the equilibrium computations will have to be redone.}. As an example we
see that for $z=1$ (the third slot), we have that $p=2$. The $leftAlloc$
and $rightAlloc$ vectors also
hold $k$ values corresponding to the respective equilibrium demands. For
example, if the auctioneer is assigned $z=1$ (i.e. will buy $1$ unit of
commodity one) after the equilibrium computation, it gets directly from the
$leftAlloc$ and $rightAlloc$ vectors that $0.5$ will be sold by the left
child and that $1.5$ will be bought by the right child. If the auctioneer
is assigned $z=0.5$ it interpolates between adjacent values, and in this
case it would mean that the left child would sell $0.75$ and the right
child would buy $1.25$.
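The lookup-and-interpolate step just described can be sketched in a few lines of Python. This is our own illustration (the function name is ours); the vectors are the values from Figure~\ref{fig:auctStr}.

```python
def split_allocation(z, z_grid, left_alloc, right_alloc):
    """Given the amount z assigned to the auctioneer, interpolate the
    children's shares linearly from the stored allocation vectors."""
    for i in range(len(z_grid) - 1):
        if z_grid[i] <= z <= z_grid[i + 1]:
            t = (z - z_grid[i]) / (z_grid[i + 1] - z_grid[i])
            return (left_alloc[i] + t * (left_alloc[i + 1] - left_alloc[i]),
                    right_alloc[i] + t * (right_alloc[i + 1] - right_alloc[i]))
    raise ValueError("z outside the sampled range")

# the example slots at z = -1, 0, 1, 2, 3 (negative values denote selling)
z_grid = [-1, 0, 1, 2, 3]
left_alloc = [-1, -1, -0.5, 0, 1]
right_alloc = [0, 1, 1.5, 2, 2]
print(split_allocation(0.5, z_grid, left_alloc, right_alloc))  # (-0.75, 1.25)
```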
\textsc{CoTree} is implemented on a set of hosts interconnected in a tree structure. The height of this tree will vary from zero (when all auctioneers, consumers and producers reside on the same host) to the height of a binary tree when every auctioneer, consumer and producer resides on a separate host. We introduce a degree, $d_{i}$, defined as the number of children of host $i$. The choice of $d_i$ for each host will depend on the communication
and computation capacity of the employed hardware. If we assume that
all hosts have the same degree $d$ (which is not necessary) and the tree is well balanced, the height of the
tree is ${\cal O}\left( \frac{\log{n}}{\log{d}} \right)$. Another important design parameter is $k$, the number of elements in the $price$, $leftAlloc$ and $rightAlloc$ vectors. The choice of $k$ will depend on the tolerance on the quality of the result (\emph{cf.} the discussion in Section~\ref{sec:precision}) as well as the computer system's communication and computation capacity. Throughout the paper we assume $k$ to be equal for every consumer, producer and auctioneer, although this is not necessary. We now take a look at two extreme cases of $d$.
As one extreme, we choose $d=n$, where $n$, as above, is the number of
leaves in the tree, i.e. the number of consumer and producer agents on the
market. This results in a
host tree of height 1 where there are no internal hosts,
only a set of $n$ leaves
and the root. We let each agent deliver all its $k$ sample
values to the root. The global auctioneer then computes
an aggregated function. This requires that the global
auctioneer receives $n$ messages, each of size $\Omega(k)$.
A host tree of height 1 works well as long as the number
of agents is not too large. For a large number of
agents, the communication to the root
may become a bottleneck and the computational burden of the host holding
the root node may become too high.
As the other extreme case we choose $d=2$ which gives a binary host tree, and let each host hold only one consumer, producer or auctioneer.
The computation is now distributed among the hosts so that
the root---as well as each internal host---only
needs to process two messages of size $\Omega(k)$.
This minimizes the cost for communication (and computation)
at each single host at the expense of more messages ($2n-2$ compared to $n$)\footnote{The number of messages is equal to the number of edges in the host tree. If we assume a balanced host tree with no unary nodes, the number of edges in a binary tree with $n$ leaves is $2n-2$.} and slightly longer
communication chains; the height of the tree will be ${\cal O}(\log n)$.
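For concreteness, the height of the host tree and the message count for a full recomputation can be computed as follows. This is our own sketch (the function names are ours); it assumes a full $d$-ary host tree with $n$ a power of $d$.

```python
import math

def tree_height(n, d):
    """Height of a balanced host tree with n leaves, each host of degree d.
    The small epsilon guards against floating-point noise when the
    ratio log(n)/log(d) is an exact integer."""
    return math.ceil(math.log(n) / math.log(d) - 1e-12)

def num_messages(n, d):
    """Messages for a full recomputation = edges of the host tree:
    every node except the root sends its vectors upward exactly once."""
    edges, level = 0, n
    while level > 1:
        edges += level        # each node on this level sends one message
        level //= d           # number of nodes on the next level up
    return edges

# the two extremes for n = 1024 agents:
print(tree_height(1024, 1024), num_messages(1024, 1024))  # 1 1024
print(tree_height(1024, 2), num_messages(1024, 2))        # 10 2046
```

As in the text, the flat tree ($d=n$) minimizes the number of messages ($n$) while the binary tree ($d=2$) minimizes the per-host load at the cost of $2n-2$ messages.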
\subsection*{The Computations of an Auctioneer}
The basic structure of the algorithm is given below. The algorithm is explained in some detail
after the pseudo-code.
{\footnotesize
\begin{verbatim}
if the node only has one child then
copy the vectors of that child
else
assign the highest p of the two children
to slot 0 in price and assign the
minimum values of the respective children
to the first slots in leftAlloc and rightAlloc
for i:= 1 to k-1 {
//This loop is run k-1 times
assign as much as possible to
the right child
if the price of the right child is larger
or equal to that of the left child then {
if the right child is at its boundary then
assign the value of the left child to slot i
in price
else
assign the value of the right child to slot i
in price
assign the current allocations to leftAlloc
and rightAlloc
continue with the next iteration
} //end if
assign as much as possible to the
left child
if the price of the left child is larger
or equal to that of the right child then {
if the left child is at its boundary then
assign the value of the right child to slot i
in price
else
assign the value of the left child to slot i
in price
assign the current allocations to leftAlloc
and rightAlloc
continue with the next iteration
} //end if
interpolate to the point where the
prices are expected to be equal
assign price, leftAlloc and
rightAlloc as described below
} //end for
\end{verbatim}
} %footnotesize
If the node only has one child, the obvious solution is simply to copy
its vectors. Otherwise we have to calculate the $k$
values for the three vectors as described below.
The algorithm is most easily explained through
an example. Assume that the $price$ vectors sent by the children to the
auctioneer are as in Figure~\ref{fig:margPric}. (The numbers in
Figure~\ref{fig:auctStr} are from the computation with these price
samples.) Note that $-1 \leq z_{left} \leq 1$ and $0
\leq z_{right} \leq 2$, and hence $-1 \leq z \leq 3$ for the
auctioneer.
%-----------------------
\begin{figure}[!hbt]
\epsfxsize=2.5in
\hspace*{\fill}
\epsffile{figs/margPric.eps}
\hspace*{\fill}
\vspace{-1in}
\caption{{\it The prices of two children.}}
\label{fig:margPric}
\end{figure}
%-----------------------
We start by assigning the highest value of slot $0$ of the price vectors of the two children to slot $0$ of the auctioneer's $price$ vector. This is because the child with the highest price will start to buy at the auctioneer's
minimal $z$. Hence, $4$ is filled in at slot $0$, since $p=3$ for the left
child and $p=4$ for the right child. The minimum values of the left and right children are filled in at slot $0$ of $leftAlloc$ and $rightAlloc$ respectively.
Then we enter the \texttt{for} loop. For each
$i=1 \ldots\ k-1$, we test the allocation $z =min+\frac{i}{k-1}(max-min)$.
First assign as much of $z$ as possible to the
right child. Assigning as much as possible to the right node means that the
maximum value for that child can not be exceeded and that the value
assigned to the left child can not be smaller than the value assigned to it
in the previous iteration. In the first iteration
this means that we start by assigning $1$ to the right child and $-1$ to
the left child. In this case $p=3$ for both children and hence the
condition (that the price of the right child is higher than or equal to the price of the left child) is true and $3$ will be assigned to slot $1$ in $price$, $-1$
will be assigned to slot $1$ of $leftAlloc$, and $1$ will be assigned to
slot $1$ of $rightAlloc$. Then we continue with the next iteration.
In the next iteration we start by assigning $2$ to the right child and
$-1$ to the left child. This time the condition (that the price of the right child is higher than or equal to the price of the left child) is false and we
continue with assigning $1$ to the right child and $0$ to the left. As the
price of the left child is smaller than the one of the right
child, we continue to the interpolation. By now we have computed that
$p_{right}(2) = 1$, $p_{right}(1) = 3$, $p_{left}(-1)=3$, and
$p_{left}(0)=1$. We now do a linear interpolation to estimate where the
prices are equal. Thus, we set up the equation
$$\begin{array}{l}
p_{right}(1)+\Delta z_{right}*\frac{p_{right}(1)-p_{right}(2)}{1-2}= \\
p_{left}(-1)+(1-\Delta z_{right})*\frac{p_{left}(-1)-p_{left}(0)}{-1-0}.
\end{array}$$
Solving the equation
gives $\Delta z_{right} = \frac{1}{2}$ and the resulting price
($p_{right}(1)+\frac{1}{2}*\frac{p_{right}(1)-p_{right}(2)}{1-2}=2$) is
assigned to slot $2$ in $price$, $-0.5$ is assigned to slot $2$ in
$leftAlloc$ and $1.5$ is assigned to slot $2$ of $rightAlloc$.
The corresponding computation is done for $i=3$.
For the last iteration ($i=4$) we have that the first if statement is true
and that the right child is at its boundary, thus we assign $0.25$ to
$price$, $1$ to $leftAlloc$ and $2$ to $rightAlloc$. The rationale
behind selecting the price of the other node when one of them is
at its boundary is that once the boundary has been reached, this child can
not buy anything more, and it will not sell before the price is above the
price of the other child.
In this simple example the sampled $price$ values exactly matched the
requested resource amounts. In general this is not the case; rather, the price must be
interpolated from adjacent values.
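The computation described above can also be exercised in running code. The following Python sketch is our own construction rather than the paper's implementation: it rebuilds the $price$, $leftAlloc$ and $rightAlloc$ vectors for the two example children, but locates each split by bisection instead of the incremental scan of the pseudo-code; since the price functions are monotonically decreasing, both find the same equal-price point. All names and the dictionary layout are ours.

```python
def price_at(child, z):
    """Linearly interpolate a child's sampled price function at allocation z.
    Samples are assumed equally spaced on [zmin, zmax], zmin < zmax."""
    prices, zmin, zmax = child["prices"], child["zmin"], child["zmax"]
    k = len(prices)
    t = (z - zmin) / (zmax - zmin) * (k - 1)  # position on the sample grid
    i = min(int(t), k - 2)
    return prices[i] + (t - i) * (prices[i + 1] - prices[i])

def aggregate(left, right, k, tol=1e-6):
    """Compute the parent's price, leftAlloc and rightAlloc vectors."""
    zmin = left["zmin"] + right["zmin"]
    zmax = left["zmax"] + right["zmax"]
    price, left_alloc, right_alloc = [], [], []
    for i in range(k):
        z = zmin + i * (zmax - zmin) / (k - 1)
        # feasible range for the right child's share of z
        lo = max(right["zmin"], z - left["zmax"])
        hi = min(right["zmax"], z - left["zmin"])
        for _ in range(60):          # bisection on the right child's share
            mid = (lo + hi) / 2
            if price_at(right, mid) > price_at(left, z - mid):
                lo = mid             # right child bids higher: give it more
            else:
                hi = mid
        zr = (lo + hi) / 2
        zl = z - zr
        if zr >= right["zmax"] - tol:    # right child maxed out:
            p = price_at(left, zl)       # the left child sets the margin
        elif zl >= left["zmax"] - tol:   # left child maxed out
            p = price_at(right, zr)
        else:                            # interior split (prices equal) or a
            p = max(price_at(left, zl),  # child at its lower bound: the
                    price_at(right, zr)) # higher bidder sets the margin
        price.append(p)
        left_alloc.append(zl)
        right_alloc.append(zr)
    return {"zmin": zmin, "zmax": zmax, "prices": price,
            "leftAlloc": left_alloc, "rightAlloc": right_alloc}

# the two example children, sampled with k = 3 for brevity
left = {"zmin": -1.0, "zmax": 1.0, "prices": [3.0, 1.0, 0.25]}
right = {"zmin": 0.0, "zmax": 2.0, "prices": [4.0, 3.0, 1.0]}
parent = aggregate(left, right, k=5)
# parent["prices"] is approximately [4, 3, 2, 1, 0.25]
```

Running this sketch reproduces the numbers of Figure~\ref{fig:auctStr}: the slot for $z=1$ gets $p=2$, with the left child selling $0.5$ and the right child buying $1.5$.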
As seen from the pseudo-code above, for each of the possible allocations,
the worst case is that both the if-statements return false and we have to
perform the interpolation which is done in constant time. Hence, the
computation time grows with the number of samples as ${\cal O}(k)$.
Looking at the total computation time of \textsc{CoTree} we again investigate the cases of $d=n$ and $d=2$. When $d=n$ (all auctioneers reside on one host) the computation time is the computation time of each auctioneer times the number of auctioneers, i.e. ${\cal O}(nk)$. When $d=2$ and we let each host hold only one auctioneer, the computation time is the computation time of each auctioneer times the height of the tree, i.e. ${\cal O}(k\log n)$. It is important to remember, though, that the communication, rather than the computation, is the important performance measure here. (See further Section~\ref{sec:experiments}.)
\end{document}