WebAssembly is a new W3C standard that provides a portable compilation target for various languages. All major browsers can run WebAssembly programs, and its use extends beyond the web: there is interest in compiling cross-platform desktop applications, server applications, IoT and embedded applications to WebAssembly because of the performance and security guarantees it aims to provide. Indeed, WebAssembly has been carefully designed with security in mind. In particular, WebAssembly applications are sandboxed from their host environment. However, recent work has brought to light several limitations that expose WebAssembly to traditional attack vectors. Visitors of websites using WebAssembly have been exposed to malicious code as a result.
In this paper, we propose an automated static program analysis to address these security concerns. Our analysis is focused on information flow and is compositional. For every WebAssembly function, it first computes a summary that describes in a sound manner where the information from its parameters and the global program state can flow to. These summaries can then be applied during the subsequent analysis of function calls. Through a classical fixed-point formulation, one obtains an approximation of the information flow in the WebAssembly program. This results in the first compositional static analysis for WebAssembly. On a set of 34 benchmark programs spanning 196kLOC of WebAssembly, we compute at least 64% of the function summaries precisely in less than a minute in total.
Keywords: Static program analysis, WebAssembly, security
WebAssembly [1] “is a binary instruction format for a stack-based virtual machine” [2], designed as a compilation target for high-level languages. The specification of its core has been a W3C standard since December 2019 [3]. WebAssembly was designed for the purpose of embedding binaries in web applications, thereby enabling intensive computations on the web. Current trends however show that the usage of WebAssembly goes beyond web applications. It is now used for cross-platform desktop applications, thanks to its ability to easily incorporate functions that are provided by the runtime. In particular, the WebAssembly System Interface (WASI) [4] is an API that focuses on providing such functions to WebAssembly applications, in order to deal with files and networking. As long as the proper functions are provided, WebAssembly can also be used for IoT backends [5] and for embedded systems [6]. Finally, various runtimes are being developed for WebAssembly [7]–[12].
WebAssembly is quickly gaining popularity: a 2019 study [13] demonstrated that 1 in 600 websites among the top 1 million Alexa websites rely on WebAssembly. However, the same study revealed an alarming finding: in 2019, the most common application of WebAssembly was cryptojacking, i.e., relying on the visitor’s computing resources to mine cryptocurrencies without authorisation. Moreover, despite being designed with security in mind, WebAssembly applications remain vulnerable to several traditional security attacks, on multiple execution platforms [14].
Consequently, there needs to be proper tool support for preventing and identifying malicious usage of WebAssembly. There has been some early work on improving the safety and security of WebAssembly, e.g., through improved memory safety [15], code protection mechanisms [16], and sandboxing [17]. Also, dynamic analyses have been proposed for detecting cryptojacking [18], [19] or for performing taint tracking [20], [21]. However, not a single static analysis for WebAssembly has been proposed so far.
In this paper, we aim to fill this void by providing the foundations for a method of static analysis tailored to WebAssembly. A number of challenging problems need to be overcome. First, traditional whole-program analyses cannot be applied, as WebAssembly binaries are often libraries whose usage is not known statically. Second, static analysis of binaries is challenging due to their low-level nature [22], [23]. Finally, for the sake of applicability, the analysis of WebAssembly binaries has to be fast and automated, so that it can be incorporated into runtimes to identify potential security problems before running the application under analysis.
To address these challenges, we turn to a compositional, summary-based analysis method [24]. In such an analysis, code segments are analysed in isolation from each other and the analysis computes a summary of their individual behaviour. Our analysis aligns code segments with WebAssembly functions, and computes summaries that describe how information flows across the execution of functions. We deliberately do not model flow through the linear memory of WebAssembly nor traps as part of the summaries, and we focus on explicit information flows. We observe that, in practice, this does not result in soundness issues on 34 benchmark programs.
This paper makes the following contributions:
We present the first compositional static analysis for WebAssembly, in the form of a static information flow analysis. The analysis derives function summaries that approximate the information flow within each function, and uses these summaries to conduct the inter-procedural analysis in a compositional manner.
We demonstrate how a static analysis for WebAssembly can benefit from the design of WebAssembly to statically over-approximate indirect function calls.
We implemented our analysis within a framework that we will open source. We evaluated it on 34 WebAssembly programs totalling 196kLOC, which are analysed in under a minute in total, with a precision of at least 64%.
In this section, we motivate the desiderata for any static analysis of WebAssembly and highlight the peculiarities of WebAssembly that distinguish it from other assembly languages in terms of static analysis support.
Safety has been a focal point in the design of WebAssembly, as mentioned in the documentation: “the design of WebAssembly promotes safe programs by eliminating dangerous features from its execution semantics” [25]. However, suitability as an efficient compilation target for languages such as C and C++ has received equal consideration. The key feature to which WebAssembly owes this suitability is its linear memory model [15]: memory is represented as an array of bytes that can be read from and written to without restriction. To ensure safety, WebAssembly programs, including their linear memory, are isolated from the host execution environment: an incorrect manipulation of the linear memory cannot result in arbitrary code being executed on the host. However, WebAssembly programs remain prone to the usual vulnerabilities, such as buffer overflows, within their own execution. Similarly, although the isolation of the WebAssembly executable limits the possibilities for remote code execution, it remains feasible [26] through a combination of indirect function calls, buffer overflows, and code-executing functions such as Emscripten’s emscripten_run_script, which can run arbitrary JavaScript code in the browser. Lehmann et al. [14] have demonstrated that WebAssembly binaries are vulnerable to various such attacks, and that this is the case on several execution platforms.
To address the increasing security concerns, WebAssembly refinements such as improved memory safety [15] and two-way sandboxing [17], as well as code protection mechanisms [16], have been proposed. The first dynamic analyses for vulnerability detection have also been proposed, against cryptojacking [18] and unwanted information flow [20], [21]. To date, however, no automated static analysis has been proposed to protect the browser or WebAssembly runtime from executing malicious instructions.
WebAssembly is by definition a binary instruction format. For the browser or runtime executing WebAssembly instructions, the original source code of the compiled program is not available. On the one hand, this is unfortunate, as source code can often be analysed with higher precision since it reflects developer intentions more directly. On the other hand, analysing a small binary language comes with the advantage that its semantics is well-defined. The core language specification of WebAssembly is not only a W3C standard, but has also been mechanically formalised [27]. Notably, the WebAssembly specification does not contain any undefined behaviour.
Static data flow analysis of x86 binaries is notoriously difficult, mostly due to the difficulty of constructing the required Control Flow Graph (CFG) in the presence of arbitrary jump instructions. Several data flow analyses [23], [28]–[30] dedicate a separate and complex analysis to this task, or have to refine a pre-computed CFG during their own analysis [31]. The design of WebAssembly alleviates this challenge. Branching instructions unambiguously identify the targets of their jumps, either as a single target (with the br instruction), two targets (br_if), or a static list of targets (br_table). Moreover, function calls, another form of branching, come in two forms: the \(\texttt{call }i\) instruction calls the function identified by the statically defined index \(i\), while the call_indirect \(t\) instruction calls a function of type \(t\), whose index is read from a statically defined table at the position given by the value on top of the stack. Hence, the call_indirect instruction is the only unknown in the control flow of WebAssembly. By relying on type information, the targets of an indirect call can be approximated, enabling the construction of an approximate call graph before the analysis.
A WebAssembly module is defined as a set of functions, some of which are exported and made available to the host execution environment, some of which are imported and are made available by the host execution environment to be called from within the WebAssembly module. This bi-directional interface enables replacing JavaScript libraries with more efficient binaries, which is a prevalent practice as Musch et al. found that 39% of the top 1 million Alexa websites that rely on WebAssembly do so in the form of binary libraries [13].
Hence, any whole-program analysis would require knowing how a WebAssembly module is used by the client code, and would need to support analysing both client-side JavaScript code and WebAssembly code. This would be a tremendous engineering effort, and would potentially result in slow analyses, as whole-program analyses are known to face scalability issues [32], [33]. Instead, we argue that some form of compositional analysis is required. In a compositional analysis, the approximation of the semantics of an entire program is obtained by composing the approximations of the semantics of the program’s parts [24]. As a result, a compositional analysis can not only analyse whole programs, but can also analyse portions of programs from any entry point, which fits the need for analysing libraries, where an entry point is a function call. Compositional analyses, and similar forms of modular analyses, have been shown to scale well [34]–[41].
A summary-based compositional analysis approximates the semantics of a program part through a summary. In our case, the program parts that are analysed in isolation from each other are the functions of the WebAssembly program under analysis. The information maintained in the summary of a function needs to be sufficiently precise to support the purpose of the analysis. To enable determining unwanted information flow, a function summary should at least provide an approximation of what the function returns in terms of its parameters, and of how it modifies the global variables (i.e., registers). It may also include information on how the linear memory is updated. As an example, consider the following excerpt of a WebAssembly module with one global variable. It defines a function that takes one argument and returns a value. The function first pushes the constant 1 on the value stack (i32.const 1), then pushes the value of local variable 0 (local.get 0), which is the first parameter of the function. Finally, both values are added with the binary operation i32.add, which reads both values from the stack and pushes the result on the stack. The top value of the stack after the last instruction is the return value of the function. Hence, this function returns the value of its parameter incremented by one, and does not modify the global variable.
(type (func i32 -> i32))
(func (type 1) (local)
i32.const 1 ;; stack is: [1]
local.get 0 ;; stack is: [arg0, 1]
i32.add) ;; stack is: [arg0+1]
For this function, our summary-based compositional analysis produces the summary \((\{\textbf{parameter}(0)\}, \{\textbf{global}(0)\})\). This indicates that the return value (first element of the tuple) contains information from the first parameter of the function, and that the global variable after the execution of the function only contains information from the global variable before the execution. This summary holds regardless of the values of its parameters, the values of the global variables, and the content of the linear memory.
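For concreteness, such a summary could be represented by a datatype along the following lines. This is a minimal OCaml sketch of ours (the module and constructor names are not taken from the implementation):

(* A taint source is either a function parameter or a global variable,
   identified by its index. *)
type taint = Parameter of int | Global of int

module TaintSet = Set.Make (struct type t = taint let compare = compare end)

(* A summary: taint of the optional return value (a sequence of length 0 or 1)
   and taint of each global variable after the function has executed. *)
type summary = {
  return_taint  : TaintSet.t list;
  globals_taint : TaintSet.t list;
}

(* The summary inferred for the example function: the return value depends on
   parameter 0, and global 0 only carries its own initial value. *)
let example_summary = {
  return_taint  = [ TaintSet.singleton (Parameter 0) ];
  globals_taint = [ TaintSet.singleton (Global 0) ];
}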
For presentation purposes, we introduce MiniWasm as a subset of WebAssembly that contains all defining features that should be supported by a static analysis. Our implementation is not limited to the MiniWasm subset, and supports the full WebAssembly language except for traps. We refer to the WebAssembly specification [3] and its mechanisation [27] for a formal treatment of the full language. MiniWasm retains the following defining features of WebAssembly:
32-bit integers.
Instructions for structured control flow: blocks (block), loops (loop), and structured conditional jumps (br_if).
Instructions to manipulate the WebAssembly runtime: the local variables (local.get, local.set), the global variables (global.get, global.set), and the linear memory (load, store).
Instructions for direct and indirect function calls (call, call_indirect).
Instructions for unary and binary operations (unop and binop).
Imported functions that are left undefined.
Function pointer indirection through a table.
A MiniWasm module is composed of a list of function types, a set of functions, and a table that identifies function pointers. Without loss of generality, MiniWasm does not contain the following features of the WebAssembly specification:
Other datatypes: WebAssembly supports 64-bit integers, 32-bit floats, 64-bit floats, as well as various operations on these types.
A WebAssembly module may declare the initial contents of the memory as part of its definition. The contents of the memory can then be modified, either during the execution of a WebAssembly function, or by the host environment. This is not included in MiniWasm, because we do not know how the memory may be changed by the host environment, and the results of a compositional analysis need to be valid for any change to that memory.
MiniWasm does not feature exceptional control flow (traps) nor the return instruction.
In WebAssembly, functions may have names: these are easier to read than indices when referring to a function in call instructions. Moreover, WebAssembly modules may export functions, giving them a name, such that they can be called from the host environment. MiniWasm does not include names.
MiniWasm only features conditional branching (br_if), as the other WebAssembly branching instructions (br, br_table) can be rewritten as conditional branching.
In contrast to the actual WebAssembly specification, the linear memory of MiniWasm is a map from 32-bit integer values to 32-bit integer values, and the store and load operations do not have an extra parameter to specify alignment, signedness, and pack size.
The syntax of MiniWasm is defined below.
\[\begin{aligned} \textit{module} &::= \texttt{(module } \textit{type}^* ~ \textit{func}^* ~ \textit{table}\texttt{)}\\ \textit{type} &::= \texttt{(type (func }\textit{ft}\texttt{))}\\ \textit{bt},\textit{ft} &::= t^* \rightarrow t^*\\ \textit{t} &::= \texttt{i32}\\ \textit{func} &::= \texttt{(func (type }\textit{tidx}\texttt{))}\\ &~|~ \texttt{(func (type }\textit{tidx}\texttt{) (local }t^*\texttt{)}~\textit{instr}^*\texttt{)}\\ \textit{table} &::= \texttt{(table }n^*\texttt{)}\\ \textit{instr} &::= \textit{data} ~|~ \textit{control}\\ \textit{data} &::= \texttt{drop} ~|~ t.\texttt{const}~n ~|~ t.\textit{binop} ~|~ t.\textit{unop} \\ &~|~ \texttt{local.get}~n ~|~ \texttt{local.set}~n \\ &~|~ \texttt{global.get}~n ~|~ \texttt{global.set}~n\\ &~|~ \texttt{load} ~|~ \texttt{store}\\ \textit{control} &::= \texttt{block}~\textit{bt}~\textit{instr}^*~\texttt{end} ~|~ \texttt{loop}~\textit{bt}~\textit{instr}^*~\texttt{end}\\ &~|~ \texttt{call}~\textit{ft}~n ~|~ \texttt{call_indirect}~\textit{ft} ~|~ \texttt{br_if}~l \\ n,l,\textit{tidx} &::= \text{a number}\end{aligned}\] A MiniWasm module (definition module in the syntax) contains:
A sequence of function type declarations (\(\textit{type}^*\)).
A sequence of function declarations (\(\textit{func}^*\)).
A table (table) that is used to identify the targets of indirect function calls: the table is a sequence of indices of functions that may be called by a call_indirect instruction.
A function is either an imported function, defined by the host environment, or a function defined in the module. A function has a type, declared by giving its index \(\textit{tidx}\) in the sequence of type declarations. A defined function may have local variables, and consists of a sequence of instructions. Local variables are not named but indexed, and consist of the function’s parameters followed by the declared locals: a function with one parameter and two local variables accesses its parameter as local variable 0, and the declared locals as local variables 1 and 2. Instructions are either data instructions, which manipulate the stack (drop, const), the locals (local.get and local.set), the globals (global.get and global.set), or the linear memory (load and store); or control instructions, which influence the program’s control flow. We leave binop and unop unspecified, as our analysis does not distinguish between operators. Blocks (block) and functions are annotated with their types (respectively, bt and ft). We denote by \(\textit{bt}_{\textit{in}}\) (resp. \(\textit{bt}_{\textit{out}}\)) the input arity (resp. output arity) of a block type, and similarly for a function type. Blocks act as delimiters inside functions, identifying jump targets. Below, we first give a datatype reading of this grammar, and then illustrate MiniWasm through a few examples.
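The grammar above maps directly onto an algebraic datatype. The following OCaml sketch is ours and mirrors MiniWasm only; the datatype used in the actual implementation, which covers full WebAssembly, differs:

(* MiniWasm abstract syntax, mirroring the grammar above. *)
type typ = I32                                   (* t ::= i32 *)
type func_type = typ list * typ list             (* ft, bt ::= t* -> t* *)

type data_instr =
  | Drop
  | Const of int32                               (* t.const n *)
  | Unop of string | Binop of string             (* operators left abstract *)
  | LocalGet of int | LocalSet of int
  | GlobalGet of int | GlobalSet of int
  | Load | Store

type control_instr =
  | Block of func_type * instr list
  | Loop of func_type * instr list
  | Call of func_type * int                      (* call ft n *)
  | CallIndirect of func_type                    (* call_indirect ft *)
  | BrIf of int                                  (* br_if l *)

and instr = Data of data_instr | Control of control_instr

type func =
  | Imported of int                                                     (* type index *)
  | Defined of { type_idx : int; locals : typ list; body : instr list }

(* "module" is an OCaml keyword, hence the name. *)
type modul = {
  types : func_type list;
  funcs : func list;
  table : int list;                              (* function indices *)
}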
The following example demonstrates the use of structured jump instructions. The br_if instruction is used to jump out of a number of enclosing blocks, or to jump back to the beginning of a loop. Here, br_if 0 breaks out of the current block (delimited by block and end below) if the top value on the stack is non-zero. In general, br_if n breaks out of the nth parent block, or jumps to the beginning of a loop, as we will see in the next example.
(type (func i32 i32 -> i32))
(func (type 1) (local)
;; this block leaves one value on the stack
block -> i32
local.get 0 ;; stack is [arg0]
local.get 1 ;; stack is [arg1, arg0]
;; breaks out of the block
;; if arg1 is non-zero
br_if 0
drop ;; stack is []
local.get 1 ;; stack is [arg1]
end)
;; final stack is [arg1] if arg1 = 0
;; otherwise it is [arg0]
The following example demonstrates the use of the loop construct and of local variables. The function takes one argument and produces one return value. To do so, it relies on two extra local variables (declared with (local i32 i32)). The local variables are manipulated with the local.get and local.set instructions. The local variables of a function form a sequence consisting of the parameters of the function followed by the declared local variables, which are initialised to 0 upon function entry. A break instruction inside a loop block jumps back to the beginning of the loop. If the end of the loop is reached without breaking, the loop ends its execution. This means that in the following code, the loop iterates while local1-1 is non-zero.
(type (func i32 -> i32))
(func (type 1)
(local i32 i32)
local.get 0 ;; stack is [arg0]
local.set 1 ;; stack is []
loop
local.get 2 ;; stack is [local2]
i32.const 1 ;; stack is [1, local2]
i32.add ;; stack is [local2+1]
local.set 2 ;; stack is []
local.get 1 ;; stack is [local1]
i32.const 1 ;; stack is [1, local1]
i32.sub ;; stack is [local1-1]
local.set 1 ;; local1 is set to local1-1
local.get 1 ;; stack is [local1]
br_if 1
end
local.get 2)
The final example illustrates direct and indirect function calls. A direct function call is performed by placing the arguments on the stack (here with i32.const 0 and i32.const 1), followed by the call instruction. The function called is the one with the corresponding index: in this case, the function with index 0. Function indices are assigned to functions in the order in which they are defined in the module: function 0 is the first function declared in the module. The return value of the called function is on the top of the stack after the call, and control returns to the caller once the called function has finished its execution. For indirect function calls, the arguments are also placed on the stack (i32.const 10 and i32.const 15). Then, an index is placed on the stack (i32.const 0) before performing the call_indirect instruction. This index is looked up in the table (declared at the end of the module), and the corresponding function is called. In this case, index 0 is looked up in the table: it points to function 0, which is therefore called. Note that the call_indirect instruction provides the type of the function to be called.
(type (func i32 i32 -> i32))
(func (type 1) (local) ...) ;; function 0
(func (type 1) (local)
i32.const 0
i32.const 1
;; calls fun. 0 with [0,1] as arguments
;; results in 0+1
call 0
drop ;; stack is now empty
i32.const 10
i32.const 15
i32.const 0
call_indirect i32 i32 -> i32)
(table
0 ;; "pointer" to function 0
1) ;; "pointer" to function 1
This formalisation is inspired by the implementation of the WebAssembly specification [42], which itself supports the full WebAssembly language. A state of the execution of a MiniWasm module consists of a stack of values (denoted \(V\)), a stack of administrative instructions, called the administrative stack (\(A\)), local variables (\(L\)), global variables (\(G\)), and the linear memory (\(M\)). Accessing the value in linear memory \(M\) at address \(v\) is denoted \(M[v]\), and updating the value at address \(v_1\) with value \(v_2\) is denoted \(M[v_1\mapsto v_2]\). Administrative instructions are either a plain MiniWasm instruction (plain), or synthetic indications that a function is being invoked (invoke), that a function call is being executed (frame), that a break has happened (br), or labels that identify blocks and loops (label). The linear memory is represented as a mapping from values to values. Functions have a type (ft), a number of local variables (\(n\)), and a body which is a sequence of instructions.
\[\begin{aligned} \textit{state}\in\textit{State} &= \textit{VStack}\times\textit{AStack}\times\textit{Locals}\times\textit{Globals}\times\textit{Mem}\\ L\in\textit{Locals} &= \textit{Value}^*\\ G\in\textit{Globals} &= \textit{Value}^* \\ v\in\textit{Value} &= \mathbb{I}^{32}\text{, the set of 32-bit integers}\\ V\in\textit{VStack} &= \textit{Value}^*\\ A\in\textit{AStack} &= \textit{AInstr}^*\\ M\in\textit{Mem} &= \textit{Value}\rightarrow\textit{Value}\\ \textit{ainstr}\in\textit{AInstr} &::= \textbf{plain}(\textit{instr})\\ &~|~ \textbf{invoke}(\textit{fun}, \textit{fidx}) ~|~ \textbf{frame}(n, F, V, A, \textit{fidx})\\ &~|~ \textbf{br}(n, V) ~|~ \textbf{label}(n, \textit{instr}^*, V, A) \\ \textit{fun} &::= (\textit{ft},n,\textit{instr}^*)\\ % n is nlocals \textit{fidx},n &\text{ are integers}\end{aligned}\]
The execution of a MiniWasm module is performed by the transition function defined in Figure 1. We use the following notation for sequences: \(a^*\) and \(A\) are sequences of \(a\), \(a\cdot A\) prepends element \(a\) to the sequence \(A\), \(A\cdot A'\) concatenates two sequences, \(A_i\) is the \(i\)th element of a sequence, \(\textsf{take}(n, A)\) keeps only the first \(n\) elements of a sequence, \(\textsf{drop}(n, A)\) drops the first \(n\) elements of a sequence, and \(\textsf{rev}(A)\) is the reverse of sequence \(A\). A sequence of \(n\) zeros can be constructed with \(\textsf{zeros}(n)\).
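These sequence operations have a direct functional reading; the following OCaml sketch (ours) only serves to pin down their behaviour:

(* take n A keeps the first n elements; drop n A removes them. *)
let rec take n xs =
  if n = 0 then [] else match xs with [] -> [] | x :: rest -> x :: take (n - 1) rest

let rec drop n xs =
  if n = 0 then xs else match xs with [] -> [] | _ :: rest -> drop (n - 1) rest

(* rev A reverses a sequence; zeros n builds a sequence of n zero values. *)
let rev = List.rev
let zeros n = List.init n (fun _ -> 0l)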
Data Instructions. drop removes the top value from the value stack. i32.const pushes a constant on the stack. For unary and binary operations, we assume that the behaviour of i32.unop (resp. i32.binop) is described by function unary (resp. binary). Accessing and modifying locals and globals is performed by accessing or updating the corresponding sequence of values (\(L\) or \(G\)). Loading from and storing to the linear memory is a matter of accessing or updating the memory mapping \(M\).
Blocks, Loops, and Breaks. The semantics of the block and loop instructions is to place a label on the administrative stack, containing the body instructions to execute and the number of values to keep from the stack when breaking from the label. Breaking means jumping either out of the \(n\)th parent block (hence the number of values kept on the stack is the out arity of the block), or back to the beginning of the \(n\)th parent loop (hence the number of values kept on the stack is the in arity of the loop). The value stack within the label only contains the number of values required to execute the corresponding body. The notation \(\textbf{plain}(\textit{instr})^*\) in the transition rule means that all instructions of the sequence \(\textit{instr}^*\) are wrapped in a plain instruction. If the top value of the stack is not 0, a br_if instruction results in a br administrative instruction being pushed, with the level requested by the br_if instruction. Otherwise, br_if is a no-op.
Function Calls and Frames. A call instruction extracts the body of the called function with the helper function function (which we assume has access to the function definitions of the module), and marks the function call through the invoke administrative instruction. In contrast, a call_indirect instruction looks up the function in the module table at the index given by the value on top of the stack. We assume the function table-lookup has access to the module table and returns the corresponding function index. An invoke administrative instruction creates a new frame. The locals of the frame are constructed by extracting the arguments of the function from the stack, to which are appended the declared locals of the function, initialised to 0. The body of the function is itself wrapped in a label. When all instructions in a frame have been executed, the frame has finished its execution and is popped from the administrative stack. Otherwise, we continue stepping within the frame.
Labels. When all instructions within a label have been executed, the label can be removed. If a br administrative instruction has to be executed within a label, and the break level is 0, the current label is removed, the value stack is updated, and the instructions that have to be executed are placed on the administrative stack. For other break levels, the current label is removed and the level of the break is decreased. For any other instruction, an execution step is performed on the state inside the label.
We now instrument the semantics of MiniWasm to characterise what has to be approximated by the static analysis presented in this paper. We are interested in the flow of information from function parameters and global variables to return values of functions, and to the global variables after a function execution. To this end, the instrumentation annotates values in the state space with taint information:
\[\begin{aligned} \textit{Value}^T &= \mathbb{I}^{32} \times \textit{TaintMap}\end{aligned}\] Taint information is a mapping from function indices to sets of taint sources: either parameters, or global variables. The bottom taint map \(\bot\) assigns no taint to all function indices.
\[\begin{aligned} \textit{TaintMap} &= (\mathbb{N} \rightarrow \mathcal{P}(\textit{Taint}))\\ \textit{Taint} &::= \textbf{parameter}(n) ~|~ \textbf{global}(n)\\ \bot &= \lambda n.\varnothing\end{aligned}\] For example, the value \((1, [1 \mapsto \{\textbf{parameter}(0)\}, 2\mapsto \{\textbf{parameter}(0),\textbf{parameter}(1)\}])\) means that the 32-bit integer value \(1\) has been computed in a way that is influenced by the first parameter of function 1 and by the first and second parameters of function 2: information flows from each of these parameters to this value. We only deal with explicit information flow in the present paper, which is in line with current research [43]–[45]. Taint maps are joined componentwise, e.g., \([1\mapsto\{\textbf{parameter}(0)\}] \sqcup [1\mapsto\{\textbf{parameter}(1)\}] = [1\mapsto\{\textbf{parameter}(0),\textbf{parameter}(1)\}]\).
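A possible representation of taint maps and their componentwise join, as a minimal OCaml sketch (the module and constructor names are ours):

type taint = Parameter of int | Global of int

module TaintSet = Set.Make (struct type t = taint let compare = compare end)
module IntMap = Map.Make (Int)

(* A taint map assigns a set of taint sources to each function index;
   absent keys are implicitly the empty set, so the empty map is bottom. *)
type taint_map = TaintSet.t IntMap.t

let bottom : taint_map = IntMap.empty

(* Componentwise join. *)
let join (t1 : taint_map) (t2 : taint_map) : taint_map =
  IntMap.union (fun _ s1 s2 -> Some (TaintSet.union s1 s2)) t1 t2

(* [1 -> {parameter(0)}] joined with [1 -> {parameter(1)}]
   yields [1 -> {parameter(0), parameter(1)}]. *)
let _example =
  join
    (IntMap.singleton 1 (TaintSet.singleton (Parameter 0)))
    (IntMap.singleton 1 (TaintSet.singleton (Parameter 1)))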
A summary represents how information flows from parameters and global variables to the optional return value of a function (represented as a sequence of length 0 or 1, in the domain \(\mathcal{P}(\textit{Taint})^*\)), and to the global variables after the execution of the function (in the domain \(\mathcal{P}(\textit{Taint})^*\)). This is therefore represented as a tuple of which the first element is the taint of the optional return value, and the second element the taint of the global variables. The instrumented semantics computes a set of summaries \(S\). In order to construct a summary for a function with index fidx, helper function summary extracts from the instrumented information of the optional return value (\(t_r\)) the taint from the point of view of the current function (\(t_r[\textit{fidx}]\)), and similarly for the global variables after the function execution.
\[\begin{aligned} \textit{Summary} &= \mathcal{P}(\textit{Taint})^* \times \mathcal{P}(\textit{Taint})^*\\ S\in\textit{Summaries} &= \mathcal{P}(\textit{Summary})\\ \textsf{summary}(\textit{fidx},(\_, t_r)^*,(\_, t_g)^*) &= (t_r[\textit{fidx}]^*, t_g[\textit{fidx}]^*)\end{aligned}\] The instrumentation of helper functions is given below. Unary and binary operations propagate the information from their parameters to their result. Function \(\textsf{zeros}^T\) provides a sequence of zeros with no taint. Function taint-params (resp. taint-globals) marks parameters (resp. global variables) with their taint.
\[\begin{aligned}
\textsf{unary}^T(\textit{unop}, (v, t)) &= (\textsf{unary}(\textit{unop}, v), t) \\
\textsf{binary}^T(\textit{binop}, (v_1, t_1), (v_2, t_2)) &= (\textsf{binary}(\textit{binop}, v_1, v_2), t_1\sqcup t_2)\\
\textsf{zeros}^T(0) &= \epsilon\\
\textsf{zeros}^T(n) &= (0, \bot) \cdot \textsf{zeros}^T(n-1)\\
\textsf{taint-params}(\textit{fidx}, \epsilon, n) &= \epsilon\\
\textsf{taint-params}(\textit{fidx}, (v,t)\cdot V, n) &= (v, t\sqcup[\textit{fidx}\mapsto\{\textbf{parameter}(n)\}])\\&~~~\cdot\textsf{taint-params}(\textit{fidx}, V,n+1) \\
\textsf{taint-globals}(\textit{fidx}, \epsilon, n) &= \epsilon\\
\textsf{taint-globals}(\textit{fidx}, (v,t)\cdot G, n) &= (v, t\sqcup[\textit{fidx}\mapsto\{\textbf{global}(n)\}])\\&~~~\cdot\textsf{taint-globals}(\textit{fidx}, G, n+1)\end{aligned}\] Finally, the instrumentation of the semantics of MiniWasm is given in Figure 2, where the instrumentation is highlighted in grey. Only the non-trivial cases are given. The states are extended with a set of summaries produced, \(S\). The i32.const instruction pushes a new value that has no taint. Unary (resp. binary) operations propagate information flow according to the \(\textsf{unary}^T\) (resp. \(\textsf{binary}^T\)) helper functions. Upon a function invocation, the parameters and global variables are marked with the appropriate taint, while other values have no taint. Upon a function return, the summary of the corresponding function call is added to the set of summaries.
With this instrumented semantics, the execution of a module produces a set of summaries, where each summary corresponds to information flow for one function execution.
We now turn to the static approximation of information flow function summaries. As motivated in Section 2, the static analysis is compositional: one function is analysed at a time, independently of its callers. Once a function summary has been inferred, it can subsequently be used to produce summaries for the callers of a function.
The information flow analysis we describe here is expressed as a data flow analysis on a control flow graph (CFG). A CFG is a set of basic blocks connected by edges, with a single entry block and a single exit block. Each MiniWasm basic block either contains one control instruction, or is a sequence of data instructions. Jumps occur from control instructions to the start of another block. The CFG of a MiniWasm function can be constructed using the traditional CFG construction approach [46]. Unlike analyses for other binary instruction formats [23], [28]–[31], analyses for WebAssembly do not require approximating control flow jumps: the targets of jump instructions (br, br_if, br_table) are always explicit.
The information flow analysis relies on the ability to identify and name elements of the stack, locals, and globals. To that end, the analysis performs a first pass over the CFG to infer the shape of the stack and to assign a unique name for each stack location, local variable and global variable. After this inference phase, the size of the value stack before and after each instruction is known precisely. The following example is annotated with the inferred names for the stack, the locals, and the globals.
(type (func i32 -> i32))
(func (type 1) (local i32)
;; stack: [], locals: [p0,l0], globals: [g0]
i32.const 1
;; stack: [i0], locals: [p0,l0], globals: [g0]
local.get 0
;; stack: [i1,i0], locals: [p0,l0], globals: [g0]
i32.add)
;; stack: [i2], locals: [p0,l0], globals: [g0]
This analysis phase is performed as a walk through the CFG. For each instruction, we statically know how it modifies the stack. For example, the i32.const 1 instruction pushes a new value on the stack, hence a new name (i0) is created for that stack location. The process is similar for instructions manipulating local and global variables, e.g., local.set updates a local variable, which is therefore assigned a new name. Special care needs to be taken at merge points in the CFG: names that may differ have to be reconciled. To do so, we extend the CFG with an extra merge node at every merge of the control flow. In a merge node, all names are replaced by fresh names. For example, if two nodes are connected to a merge node, where the top value of the stack has name \(x\) in one node and name \(y\) in the other, the top of the stack at the merge node is given a fresh name \(z\), and the analysis has to account for the fact that \(z\) results from joining \(x\) and \(y\). This is similar to the use of \(\phi\)-nodes in compilers [47]. It is up to the analysis to correctly deal with these names.
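The naming pass can be sketched as follows for a few representative data instructions, under our own simplified representation (a fresh-name generator and a shape record; control instructions and merge nodes are omitted):

(* Names identify stack slots, locals and globals. *)
type name = string

let counter = ref 0
let fresh prefix = incr counter; Printf.sprintf "%s%d" prefix !counter

(* The inferred shape of the runtime structure at a program point. *)
type shape = {
  stack   : name list;     (* top of the stack first *)
  locals  : name array;
  globals : name array;
}

(* How a single data instruction transforms the shape. *)
let step (sh : shape) = function
  | `Const _ | `LocalGet _ -> { sh with stack = fresh "i" :: sh.stack }
  | `Drop -> { sh with stack = List.tl sh.stack }
  | `LocalSet l ->
    (* The updated local is assigned a new name. *)
    let locals = Array.copy sh.locals in
    locals.(l) <- fresh "l";
    { sh with stack = List.tl sh.stack; locals }
  | `Binop ->
    (* Two operands are popped, a freshly named result is pushed. *)
    (match sh.stack with
     | _ :: _ :: rest -> { sh with stack = fresh "i" :: rest }
     | _ -> failwith "stack underflow")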
The information flow analysis computes, before and after each instruction, a map of names to their taint, i.e., to the information that they may contain, represented as a set of \(\textit{Taint}\).
\[\begin{aligned} S\in\textit{State} &= \textit{Name} \rightarrow \mathcal{P}(\textit{Taint})\end{aligned}\] The initial state of the analysis for a function that has \(n\) parameters, in a module that has \(m\) global variables, is the following, where all parameters and globals are assigned their own taint. We assume \(p_i\) (resp. \(g_i\)) is the name corresponding to the \(i\)th parameter (resp. global variable).
\[\begin{aligned} S_0 = [&p_0 \mapsto \{\textbf{parameter}(0)\}, \dots p_n \mapsto \{\textbf{parameter}(n)\}, \\ &g_0 \mapsto \{\textbf{global}(0)\} \dots g_m \mapsto \{\textbf{global}(m)\}]\end{aligned}\] The analysis is then described as a state transformer for each instruction: \([\![\textit{instr} ]\!](S, V, L, G, V', L', G')\), defined in Figure 3. This state transformer computes the state after the instruction, given the state before the instruction (\(S\)), and using information from the runtime structure inference phase, namely the names of the value stack, locals, and globals before the instruction (\(V\), \(L\), and \(G\)), as well as after the instruction (\(V'\), \(L'\), and \(G'\)).
Instructions like drop and i32.const do not propagate any information. Unary and binary operations propagate information from their operands to the resulting value. Instructions that manipulate locals and globals propagate information as follows. After getting the value of a local (resp. global) with local.get (resp. global.get), the top value of the stack is tainted with the taint of that local (resp. global). Modifying the value of a local (resp. global) with local.set (resp. global.set) propagates the information from the value being set to the resulting local (resp. global).
For a store instruction, information is propagated from the stored value to the special name mem, which indicates that information may flow anywhere in the memory. This is a deliberately sound but coarse modelling of the memory, which cannot be refined without a precise modelling of numerical values. We leave such refinements for future work. For a load instruction, information is propagated from the special name mem to the resulting value.
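To make this concrete, here is a minimal OCaml sketch of ours of the transfer function for the data instructions discussed above. It is not the actual definition of Figure 3: the variant constructors carry the inferred names explicitly, and store conservatively accumulates into the special name mem:

type taint = Parameter of int | Global of int
module TaintSet = Set.Make (struct type t = taint let compare = compare end)
module NameMap = Map.Make (String)

(* The analysis state: the taint of each name, including "mem". *)
type state = TaintSet.t NameMap.t

let get s n = Option.value (NameMap.find_opt n s) ~default:TaintSet.empty

(* v, v1, v2: names of the relevant stack slots before the instruction;
   v': name pushed by the instruction; l'/g': name of the updated local/global. *)
let transfer (s : state) = function
  | `Drop | `Const -> s                                  (* no information flow *)
  | `Binop (v1, v2, v') -> NameMap.add v' (TaintSet.union (get s v1) (get s v2)) s
  | `LocalGet (l, v') -> NameMap.add v' (get s l) s
  | `LocalSet (v, l') -> NameMap.add l' (get s v) s
  | `GlobalGet (g, v') -> NameMap.add v' (get s g) s
  | `GlobalSet (v, g') -> NameMap.add g' (get s v) s
  | `Store v -> NameMap.add "mem" (TaintSet.union (get s "mem") (get s v)) s
  | `Load v' -> NameMap.add v' (get s "mem") s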
For a \(\texttt{call}~n\) instruction, the function that is called is known precisely: it is the \(n\)th function of the module. The information is propagated according to the information flow summary of that function, which we describe shortly. For a call_indirect \(t\) instruction, the called function is not known precisely, because the analysis does not derive any numerical properties and the function called depends on the top value of the stack. However, the type \(t\) of the called function is known. Hence, any function that is the target of the call must have a matching type and must be declared in the table of the module. We can therefore statically compute a set of functions that may be called, which is what we assume \(\textsf{matching-funs} : \textit{Type} \rightarrow \mathcal{P}(\mathbb{N})\) does, returning a set of function indices. The summaries of each of these functions are then applied, and the results are joined together.
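A possible reading of matching-funs, under a simplified module representation of ours where the type of each function and the table contents are available:

(* With only i32, the input and output arities determine a function type. *)
type func_type = int * int

type modul = {
  func_types : func_type array;   (* the type of each function, by index *)
  table      : int list;          (* the function indices listed in the table *)
}

(* All functions that appear in the table and whose type matches t. *)
let matching_funs (m : modul) (t : func_type) : int list =
  List.sort_uniq compare (List.filter (fun idx -> m.func_types.(idx) = t) m.table)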
An information flow summary describes how information propagates from a function’s parameters and globals to its optional return value and to the globals after its execution. We present these summaries formally, for the case where there is exactly one return value. The general case is similar. Auxiliary helper function mk-summary constructs a summary from the state \(S\) of the information flow analysis at the end of the function, where \(v\) is the name of the top of the stack, and \(G\) are the names of the global variables at the end of the function execution. The bottom summary \(\bot\) maps the return value and each global to the empty set.
\[\begin{aligned} \textit{Summary} &= \mathcal{P}(\textit{Taint}) \times \mathcal{P}(\textit{Taint})^*\\ \textsf{mk-summary}(S, v, G) &= (S[v], S[G_1]\cdots S[G_m]) \\ \bot &= (\varnothing,\varnothing\cdots\varnothing)\end{aligned}\]
Function apply-summary applies a summary by first constructing a substitution \(\sigma\), that will replace occurrences of parameters and globals in a set of taints by their taint before the call. Then, it sets the taint of the return value \(v'\) to the taint given by the summary after substitution, and similarly for all global variables. The summary itself is extracted by lookup-summary, which provides the summary for function \(n\).
\[\begin{aligned} \textsf{apply-summary}&(n, S, V, G, v'\cdot V', G')\\ &= S[v'\mapsto \sigma(r), G'_0\mapsto \sigma(G''_0),\dots, G'_m\mapsto\sigma(G''_m)] \\ \text{where } (r, G'') &= \textsf{lookup-summary}(n) \\ \sigma(X) &= \bigsqcup_{\textbf{parameter}(i)\in X} S[V_i] ~~\sqcup~~ \bigsqcup_{\textbf{global}(i)\in X} S[G_i]\end{aligned}\]
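The same definitions read as follows in a minimal OCaml sketch of ours, where names are plain strings and the labelled arguments carry the pre-call argument and global names as well as the post-call names:

type taint = Parameter of int | Global of int
module TaintSet = Set.Make (struct type t = taint let compare = compare end)
module NameMap = Map.Make (String)

type state = TaintSet.t NameMap.t
(* A summary: taint of the return value and of each global after the call. *)
type summary = TaintSet.t * TaintSet.t array

let get (s : state) (n : string) =
  Option.value (NameMap.find_opt n s) ~default:TaintSet.empty

(* The substitution sigma: replace parameter(i) / global(i) by the taint of
   the i-th argument / global name at the call site. *)
let subst (s : state) (args : string array) (globals : string array)
    (x : TaintSet.t) : TaintSet.t =
  TaintSet.fold
    (fun t acc ->
       let source =
         match t with
         | Parameter i -> get s args.(i)
         | Global i -> get s globals.(i)
       in
       TaintSet.union source acc)
    x TaintSet.empty

(* Apply a callee summary: taint the name of the return value and the names
   of the globals after the call. *)
let apply_summary ((r, g) : summary) (s : state)
    ~(args : string array) ~(globals : string array)
    ~(ret_name : string) ~(globals_after : string array) : state =
  let s' = NameMap.add ret_name (subst s args globals r) s in
  snd
    (Array.fold_left
       (fun (i, acc) name -> (i + 1, NameMap.add name (subst s args globals g.(i)) acc))
       (0, s') globals_after)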
The state transformer presented in Figure 3 can be computed for all instructions of a function, following the traditional dataflow analysis approach [48]. The only special case to take into account is to handle merge nodes of the CFG, which were introduced by the runtime structure inference. For our taint analysis, in case of a merge node that merges a variable \(x\) with a variable \(y\) into a variable \(z\), the taint of \(z\) is \(S[x]\sqcup S[y]\).
We have now described a compositional analysis for MiniWasm functions. The main remaining question is: how do we know which summary to apply upon a function call? This is solved by performing a bottom-up inter-procedural analysis.
Computing the call graph. First, a call graph has to be computed for the WebAssembly module. Because all function call targets are either statically known in the case of the call instruction, or can be approximated by their type and the content of the module table in the case of the call_indirect instruction, we can derive an approximate call graph.
Computing the analysis schedule. From this approximate call graph, we apply Tarjan’s algorithm [49] to compute the strongly-connected components (SCCs) of the call graph, in topological order. The order of the SCCs gives us an analysis schedule: an SCC \(x\) precedes SCC \(y\) in this sequence if a function from \(y\) may call a function from \(x\), hence \(x\) has to be analysed before \(y\) so that its summary is available during the analysis of \(y\).
Analysing an SCC. The analysis of the entire module is therefore decomposed into the analysis of a set of functions that form an SCC. Each function of the SCC can then be analysed by the compositional analysis. A function call within an SCC either calls a function that has been fully analysed, as part of a previous SCC, or calls a function within the same SCC. In the first case, the summary of the called function is fully known and can be used. In the second case, the summary is unknown and we rely on the bottom summary. When one function of the SCC has been analysed, its callers that are within the same SCC are scheduled for (re-)analysis, as they can now rely on a more complete summary. This proceeds in a fixed-point fashion, until all summaries have reached their fixed point, and the analysis can proceed with the next SCC.
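The per-SCC fixed point can be computed with a simple worklist, as in the following sketch; analyse_fun (the intra-procedural analysis) and callers_in_scc are assumed to be given, and the scheduling strategy shown is only one possible choice:

(* Analyse one strongly connected component until its summaries stabilise.
   [summaries] holds the current summary of every function (bottom if the
   function has not been analysed yet). *)
let analyse_scc
    ~(analyse_fun : summaries:'summary array -> int -> 'summary)
    ~(callers_in_scc : int -> int list)
    (summaries : 'summary array)
    (scc : int list) : unit =
  let worklist = Queue.create () in
  List.iter (fun f -> Queue.add f worklist) scc;
  while not (Queue.is_empty worklist) do
    let f = Queue.pop worklist in
    let s = analyse_fun ~summaries f in
    if s <> summaries.(f) then begin
      (* The summary of f changed: its callers within the SCC are rescheduled. *)
      summaries.(f) <- s;
      List.iter (fun g -> Queue.add g worklist) (callers_in_scc f)
    end
  done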
Handling Imported Functions. A MiniWasm module can have functions that are not defined but imported. In WebAssembly, such functions usually stem from WASI. For each imported function, we manually encode an information flow summary. For example, WASI’s proc_exit function takes an argument and terminates the execution of the program. Hence, its summary is the bottom summary: it does not propagate any information.
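For illustration, the manually encoded summary of proc_exit could be written as follows (a sketch with our own types; only proc_exit is taken from the text, and a module without globals is assumed for brevity):

type taint = Parameter of int | Global of int
module TaintSet = Set.Make (struct type t = taint let compare = compare end)

(* (taint of the optional return value, taint of each global after the call) *)
type summary = TaintSet.t list * TaintSet.t array

(* proc_exit terminates the program and propagates no information. *)
let bottom_summary : summary = ([], [||])

let imported_summaries : (string * summary) list =
  [ ("proc_exit", bottom_summary) ]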
A summary does not retain information about how information flows within the memory during the execution of a function. This is a deliberate choice that sacrifices the soundness of the analysis for the benefit of precision, similar to what the soundiness manifesto advocates [50]. As we shall see in our evaluation, we did not encounter any unsound result despite this choice: in practice, WebAssembly functions do not seem to store values coming from memory in global variables, nor to return such values. It would be possible to retain this information by preserving the information attached to the special name mem in the summary, and propagating it upon application. This may however cause a precision loss: any function that stores one of its arguments in memory would taint the entire memory with the information from that argument, and subsequent loads in this function would be tainted with that information. We envision that retaining information about numerical values would allow a more precise modelling of the store, and hence more precise summaries even when the store is included.
We have implemented the information flow analysis presented here in around 3000 LOC of OCaml code. Our implementation is not limited to MiniWasm, but supports the complete WebAssembly standard. To assess the precision of the results of the analysis, we also instrumented the concrete interpreter that accompanies the WebAssembly specification [42], following the instrumentation presented in Section 4.
We have run our analysis on 34 C programs coming from two benchmark sources: the first 30 are the entirety of the PolyBench benchmarks [51], which implement arithmetic kernels and all feature indirect function calls. The remaining 4 are selected from the Language Benchmark Game [52], which aims at evaluating the performance of programming languages; they do not require complex language features such as parallelism, nor indirect function calls. Both suites have been used in the evaluation of program analyses [53]–[56]. We compiled each program to WebAssembly using clang 10.0.1, and linked it with a libc implementation built on top of WASI [57]. We used a laptop with an Intel i7-8650U CPU and 32GB of RAM, running OCaml 4.10.0.
Table 1: Benchmark programs, lines of WebAssembly code, analysis time (in seconds), and precision (precise summaries / total summaries).
Program | LOC | Time (s) | Prec. | Program | LOC | Time (s) | Prec. |
---|---|---|---|---|---|---|---|
2mm | 6815 | 2.09 | 6/9 | heat-3d | 5935 | 1.47 | 4/8 |
3mm | 6906 | 2.09 | 6/9 | jacobi-1d | 5708 | 1.44 | 4/8 |
adi | 6002 | 1.48 | 4/8 | jacobi-2d | 5811 | 1.46 | 4/8 |
atax | 6777 | 2.07 | 7/10 | lu | 6164 | 1.53 | 5/9 |
bicg | 6779 | 2.05 | 7/10 | ludcmp | 7202 | 2.18 | 7/10 |
cholesky | 6142 | 1.52 | 5/9 | mvt | 6663 | 2.04 | 6/9 |
correlation | 6769 | 2.07 | 6/9 | nussinov | 6809 | 2.11 | 6/10 |
covariance | 6672 | 2.07 | 6/9 | seidel-2d | 5756 | 1.42 | 4/8 |
deriche | 6052 | 1.52 | 4/8 | symm | 6771 | 2.04 | 6/9 |
doitgen | 6681 | 1.97 | 6/9 | syr2k | 6696 | 2.02 | 6/9 |
durbin | 5762 | 1.51 | 4/8 | syrk | 6642 | 2.02 | 6/9 |
fdtd-2d | 6760 | 1.99 | 6/9 | trisolv | 6590 | 2.10 | 6/9 |
floyd-warshall | 5760 | 1.38 | 4/8 | trmm | 6649 | 2.03 | 6/9 |
gemm | 6685 | 2.07 | 6/9 | fankuchredux | 439 | 0.08 | 4/4 |
gemver | 6778 | 2.05 | 6/9 | mandelbrot | 945 | 0.07 | 2/2 |
gesummv | 6632 | 2.00 | 6/9 | nbody | 295 | 0.03 | 2/2 |
gramschmidt | 6889 | 2.10 | 6/9 | spectral-norm | 221 | 0.06 | 3/3 |
To measure precision, we ran both the static analysis and the instrumented concrete interpreter (i.e., a dynamic analysis) in order to generate static and dynamic information flow summaries, respectively. For each function, we evaluate the precision of the summary \(S_s\) produced by the static analysis as follows. First, we join all dynamic summaries \(S_{d_1},\dots,S_{d_n}\) produced for that function (each dynamic summary corresponds to one function execution), resulting in \(S_d\). If no summary has been produced by the dynamic analysis, the function was not reached by the dynamic analysis; we therefore cannot assess the precision of the static analysis for that function, and we ignore such functions in our evaluation. For functions that have dynamic summaries, we are in one of the following situations:
\(S_d = S_s\), in which case the static analysis is fully precise.
\(S_s\) misses information from \(S_d\), in which case the static analysis has produced an unsound result (i.e., a false negative). This did not happen in our evaluation, even though summaries are soundy [50] as they do not model the taint of the linear memory.
\(S_s\) includes more information than \(S_d\), e.g., \(S_s = (\{\textbf{parameter}(0)\}, \epsilon)\) (the return value contains information from the first parameter) and \(S_d = (\varnothing, \epsilon)\) (the return value does not contain any information from the parameters). This means that the static analysis may have produced imprecise results for that function (i.e., a false positive), and we count this as an imprecise summary. In practice, however, some paths within a function that are not reached by the dynamic analysis may be taken into account by the static analysis. In this case, even though the summary derived by the static analysis may be fully precise, it is not reported as such. Hence, the precision reported here is a lower bound on the actual precision of the analysis.
Computing this for all functions of a module gives us a ratio, e.g., 6/9 for the 2mm program, indicating that the static analysis computed 6 summaries that are fully precise and true positives, and 3 that are potentially false positives. This evaluation method is similar to how other taint analysers are evaluated [43], with the difference that we measure precision on function summaries rather than solely on unwanted information flows.
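The resulting ratio can be computed as sketched below, where equal_summary and join_summaries are assumed to be provided (the names are ours):

(* For each function with at least one dynamic summary, join the dynamic
   summaries and compare with the static one; unreached functions are ignored.
   Returns (number of precise summaries, number of compared summaries). *)
let precision
    ~(equal_summary : 'summary -> 'summary -> bool)
    ~(join_summaries : 'summary list -> 'summary)
    (static : (int * 'summary) list)
    (dynamic : (int * 'summary list) list) : int * int =
  List.fold_left
    (fun (precise, total) (f, s_static) ->
       match List.assoc_opt f dynamic with
       | None | Some [] -> (precise, total)
       | Some ds ->
         let s_dyn = join_summaries ds in
         if equal_summary s_static s_dyn then (precise + 1, total + 1)
         else (precise, total + 1))
    (0, 0) static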
Table 1 lists the benchmark programs along with the time taken to analyse them and our measure of precision. In total, 196157 LOC are analysed in 56.13 seconds, with a precision lower bound of 64% for the resulting summaries. The analysis running time is low: each benchmark is analysed within 2 seconds, and on average 3495 LOC are analysed per second, with peaks up to 13500 LOC/s for shorter benchmarks like mandelbrot. The overall precision is 64%, which is in line with other static taint analysis tools [43], [58]. We notice however that the precision varies depending on the benchmark suite: it is 100% on programs from the Language Benchmark Game, and 62% on programs from the PolyBench benchmark suite. A closer look at the results for the PolyBench suite shows that many of its programs share common functions, some of which are analysed imprecisely, which propagates to all analysis results.
Even though WebAssembly is a recent standard, there has been some work focusing on its analysis and verification. Lehmann et al. [59] present a dynamic analysis framework for WebAssembly. Instead of instrumenting a WebAssembly interpreter (Section 4), one could instrument WebAssembly programs with Wasabi to compute the same information. Watt et al. [60] propose a separation logic for WebAssembly, enabling manual verification of functions and modules. Our approach derives less expressive properties, but in a fully automated way. Another existing approach, taken by CT-Wasm, is to provide a rich type system for WebAssembly [61].
Summary-based analyses have been used on numerous occasions to achieve scalable static analyses, as an alternative to whole-program analyses. Saturn [62] is summary-based, using logic programming to analyse C programs. The design of summaries is left to the analysis designer. A points-to analysis developed with Saturn can scale up to the size of the Linux kernel. Tang et al. [63] present summaries in the presence of callbacks in libraries, when a function may have to be reanalysed even if a summary has already been produced, due to new reachability relations. This is not a situation that can occur in our case as each function is analysed independently of its callers. Cassez et al. [64] demonstrate the use of function summaries in a modular analysis, using trace abstraction refinement. Yan et al. [32] propose to redesign the Soot framework for supporting summaries, arguing that summaries should be integrated within program analysis frameworks.
A taint analysis identifies information flow from a source (e.g., user input) to a sink (e.g., a database query). There exist general-purpose static taint analyses [58], [65] and specific static taint analyses for Android [66], [67], web applications [68], JavaScript [45], and event-driven programs [69].
The notion of taint summary is not new. Zhang et al. [70] rely on taint summaries to optimise a dynamic taint analysis. Staicu et al. [45] dynamically extract taint specifications for JavaScript libraries, which can then be used by static analysis tools. The taint specifications inferred are more specific than our summaries due to the object-oriented nature of JavaScript.
There is extensive work on static analysis of binaries for different platforms. We already covered existing work for WebAssembly, and we cover here the work on other platforms that is most closely related to ours. Most importantly, Ballabriga et al. [53] recently presented a static analysis for ARM binaries that focuses on memory indirections through the use of polyhedral numerical domains. Adapting this to our setting would allow the analysis to properly support memory indirections. Moreover, the use of numeric domains, as done by related work on binary analysis [23], [71], could improve the precision of our analysis: knowledge of numerical properties would enable resolving indirect calls with more precision, as well as a better modelling of the memory. The Soot framework [72] analyses JVM bytecode, which is also a stack-based binary format. Our runtime structure inference shares similarities with Soot’s analysis phases on Baf bytecode [73], namely that the shape of the stack needs to be inferred before and after each instruction.
The WebAssembly standard is gaining popularity, and there is a need for analysis tools to assess the quality of WebAssembly modules. In this paper, we propose the first compositional static analysis for WebAssembly, in the form of an information flow analysis. Its compositional nature enables it to analyse functions independently of their calling context, resulting in a scalable analysis. We evaluated the analysis on 34 WebAssembly modules, showing fast analysis times with moderate precision.
This work was partially supported by the “Cybersecurity Initiative Flanders”.
[1] A. Haas et al., “Bringing the web up to speed with webassembly,” in Proceedings of the 38th ACM SIGPLAN conference on programming language design and implementation, PLDI 2017, barcelona, spain, june 18-23, 2017, 2017, pp. 185–200.
[2] “WebAssembly.” https://webassembly.org/.
[3] A. Rossberg, Ed., “WebAssembly Core Specification,” W3C, Dec. 2019. [Online]. Available: https://www.w3.org/TR/wasm-core-1/.
[4] Bytecode Alliance, “WASI: The WebAssembly System Interface.” https://wasi.dev/.
[5] A. Hall and U. Ramachandran, “An execution model for serverless functions at the edge,” in Proceedings of the international conference on internet of things design and implementation, iotdi 2019, montreal, qc, canada, april 15-18, 2019, 2019, pp. 225–236.
[6] R. G. Singh and C. Scholliers, “WARDuino: A dynamic webassembly virtual machine for programming microcontrollers,” in Proceedings of the 16th ACM SIGPLAN international conference on managed programming languages and runtimes, MPLR 2019, athens, greece, october 21-22, 2019, 2019, pp. 27–36.
[7] S. S. Salim, A. Nisbet, and M. Luján, “Towards a webassembly standalone runtime on graalvm,” in Proceedings companion of the 2019 ACM SIGPLAN international conference on systems, programming, languages, and applications: Software for humanity, SPLASH 2019, athens, greece, october 20-25, 2019, 2019, pp. 15–16.
[8] S. S. Salim, A. Nisbet, and M. Luján, “TruffleWasm: A webassembly interpreter on graalvm,” in VEE ’20: 16th ACM SIGPLAN/SIGOPS international conference on virtual execution environments, virtual event, lausanne, switzerland, march 17, 2020, 2020, pp. 88–100.
[9] “WasmTime: A standalone runtime for webassembly.” https://github.com/bytecodealliance/wasmtime.
[10] “Wasm3: A high performance webassembly interpreter written in c.” https://github.com/wasm3/wasm3.
[11] “Lucet, the sandboxing webassembly compiler.” https://github.com/bytecodealliance/lucet.
[12] “Wasmer: The leading webassembly runtime supporting wasi and emscripten.” https://github.com/wasmerio/wasmer.
[13] M. Musch, C. Wressnegger, M. Johns, and K. Rieck, “New kid on the web: A study on the prevalence of webassembly in the wild,” in Detection of intrusions and malware, and vulnerability assessment - 16th international conference, DIMVA 2019, gothenburg, sweden, june 19-20, 2019, proceedings, 2019, pp. 23–42.
[14] D. Lehmann, J. Kinder, and M. Pradel, “Everything old is new again: Binary security of webassembly,” 2020.
[15] C. Disselkoen, J. Renner, C. Watt, T. Garfinkel, A. Levy, and D. Stefan, “Position paper: Progressive memory safety for webassembly,” in Proceedings of the 8th international workshop on hardware and architectural support for security and privacy, hasp@ISCA 2019, june 23, 2019, 2019, pp. 4:1–4:8.
[16] J. Sun et al., “SELWasm: A code protection mechanism for webassembly,” in 2019 IEEE intl conf on parallel & distributed processing with applications, big data & cloud computing, sustainable computing & communications, social computing & networking, ispa/bdcloud/socialcom/sustaincom 2019, xiamen, china, december 16-18, 2019, 2019, pp. 1099–1106.
[17] D. Goltzsche, M. Nieke, T. Knauth, and R. Kapitza, “AccTEE: A webassembly-based two-way sandbox for trusted resource accounting,” in Proceedings of the 20th international middleware conference, middleware 2019, davis, ca, usa, december 9-13, 2019, 2019, pp. 123–135.
[18] W. Wang, B. Ferrell, X. Xu, K. W. Hamlen, and S. Hao, “SEISMIC: secure in-lined script monitors for interrupting cryptojacks,” in Computer security - 23rd european symposium on research in computer security, ESORICS 2018, barcelona, spain, september 3-7, 2018, proceedings, part II, 2018, pp. 122–142.
[19] W. Bian, W. Meng, and Y. Wang, “Poster: Detecting webassembly-based cryptocurrency mining,” in Proceedings of the 2019 ACM SIGSAC conference on computer and communications security, CCS 2019, london, uk, november 11-15, 2019, 2019, pp. 2685–2687.
[20] W. Fu, R. Lin, and D. Inge, “TaintAssembly: Taint-based information flow control tracking for webassembly,” CoRR, vol. abs/1802.01050, 2018.
[21] A. Szanto, T. Tamm, and A. Pagnoni, “Taint tracking for webassembly,” CoRR, vol. abs/1807.08349, 2018.
[22] X. Meng and B. P. Miller, “Binary code is not easy,” in Proceedings of the 25th international symposium on software testing and analysis, ISSTA 2016, saarbrücken, germany, july 18-20, 2016, 2016, pp. 24–35.
[23] A. Sepp, B. Mihaila, and A. Simon, “Precise static analysis of binaries by extracting relational information,” in 18th working conference on reverse engineering, WCRE 2011, limerick, ireland, october 17-20, 2011, 2011, pp. 357–366.
[24] P. Cousot and R. Cousot, “Modular static program analysis,” in Compiler construction, 11th international conference, CC 2002, held as part of the joint european conferences on theory and practice of software, ETAPS 2002, grenoble, france, april 8-12, 2002, proceedings, 2002, pp. 159–178.
[25] “WebAssembly: Security.” https://webassembly.org/docs/security/.
[26] B. McFadden, T. Lukasiewicz, J. Dileo, and J. Engler, “Security chasms of wasm,” Technical report, Aug. 2018.
[27] C. Watt, “Mechanising and verifying the webassembly specification,” in Proceedings of the 7th ACM SIGPLAN international conference on certified programs and proofs, CPP 2018, los angeles, ca, usa, january 8-9, 2018, 2018, pp. 53–65.
[28] J. Kinder and H. Veith, “Jakstab: A static analysis platform for binaries,” in Computer aided verification, 20th international conference, CAV 2008, princeton, nj, usa, july 7-14, 2008, proceedings, 2008, pp. 423–427.
[29] M. H. Nguyen, T. B. Nguyen, T. T. Quan, and M. Ogawa, “A hybrid approach for control flow graph construction from binary code,” in 20th asia-pacific software engineering conference, APSEC 2013, ratchathewi, bangkok, thailand, december 2-5, 2013 - volume 2, 2013, pp. 159–164.
[30] A. Djoudi and S. Bardin, “BINSEC: binary code analysis with low-level regions,” in Tools and algorithms for the construction and analysis of systems - 21st international conference, TACAS 2015, london, uk, april 11-18, 2015, proceedings, 2015, pp. 212–217.
[31] G. Balakrishnan and T. W. Reps, “Analyzing memory accesses in x86 executables,” in Compiler construction, 13th international conference, CC 2004, barcelona, spain, march 29 - april 2, 2004, proceedings, 2004, pp. 5–23.
[32] D. Yan, G. (Harry) Xu, and A. Rountev, “Rethinking soot for summary-based whole-program analysis,” in Proceedings of the ACM SIGPLAN international workshop on state of the art in java program analysis, SOAP 2012, beijing, china, june 14, 2012, 2012, pp. 9–14.
[33] N. Grech, G. Fourtounis, A. Francalanza, and Y. Smaragdakis, “Shooting from the heap: Ultra-scalable static analysis with heap snapshots,” in Proceedings of the 27th ACM SIGSOFT international symposium on software testing and analysis, ISSTA 2018, amsterdam, the netherlands, july 16-21, 2018, 2018, pp. 198–208.
[34] E. Goubault, S. Putot, and F. Védrine, “Modular static analysis with zonotopes,” in Static analysis - 19th international symposium, SAS 2012, deauville, france, september 11-13, 2012. Proceedings, 2012, pp. 24–40.
[35] A. Miné, “Relational thread-modular static value analysis by abstract interpretation,” in Verification, model checking, and abstract interpretation - 15th international conference, VMCAI 2014, san diego, ca, usa, january 19-21, 2014, proceedings, 2014, pp. 39–58.
[36] M. Journault, A. Miné, and A. Ouadjaout, “Modular static analysis of string manipulations in C programs,” in Static analysis - 25th international symposium, SAS 2018, freiburg, germany, august 29-31, 2018, proceedings, 2018, pp. 243–262.
[37] Q. Stiévenart, J. Nicolay, W. D. Meuter, and C. D. Roover, “A general method for rendering static analyses for diverse concurrency models modular,” J. Syst. Softw., vol. 147, pp. 17–45, 2019.
[38] S. Blackshear, N. Gorogiannis, P. W. O’Hearn, and I. Sergey, “RacerD: Compositional static race detection,” Proc. ACM Program. Lang., vol. 2, no. OOPSLA, pp. 144:1–144:28, 2018.
[39] A. Farzan and Z. Kincaid, “Compositional bitvector analysis for concurrent programs with nested locks,” in Static analysis - 17th international symposium, SAS 2010, perpignan, france, september 14-16, 2010. Proceedings, 2010, pp. 253–270.
[40] P. Godefroid, A. V. Nori, S. K. Rajamani, and S. Tetali, “Compositional may-must program analysis: Unleashing the power of alternation,” in Proceedings of the 37th ACM SIGPLAN-SIGACT symposium on principles of programming languages, POPL 2010, madrid, spain, january 17-23, 2010, 2010, pp. 43–56.
[41] J. Whaley and M. C. Rinard, “Compositional pointer and escape analysis for java programs,” in Proceedings of the 1999 ACM SIGPLAN conference on object-oriented programming systems, languages & applications (OOPSLA ’99), denver, colorado, usa, november 1-5, 1999, 1999, pp. 187–206.
[42] “WebAssembly specification, reference interpreter, and test suite.” https://github.com/WebAssembly/spec.
[43] L. Qiu, Y. Wang, and J. Rubin, “Analyzing the analyzers: FlowDroid/iccta, amandroid, and droidsafe,” in Proceedings of the 27th ACM SIGSOFT international symposium on software testing and analysis, ISSTA 2018, amsterdam, the netherlands, july 16-21, 2018, 2018, pp. 176–186.
[44] C. Staicu, D. Schoepe, M. Balliu, M. Pradel, and A. Sabelfeld, “An empirical study of information flows in real-world javascript,” 2019.
[45] C.-A. Staicu, M. T. Torp, M. Schäfer, A. Møller, and M. Pradel, “Extracting taint specifications for javascript libraries,” 2020.
[46] A. V. Aho, R. Sethi, and J. D. Ullman, Compilers: Principles, techniques, and tools. Addison-Wesley, 1986.
[47] B. K. Rosen, M. N. Wegman, and F. K. Zadeck, “Global value numbers and redundant computations,” in Conference record of the fifteenth annual ACM symposium on principles of programming languages, san diego, california, usa, january 10-13, 1988, 1988, pp. 12–27.
[48] F. Nielson, H. R. Nielson, and C. Hankin, Principles of program analysis. Springer, 1999.
[49] R. E. Tarjan, “Depth-first search and linear graph algorithms,” SIAM J. Comput., vol. 1, no. 2, pp. 146–160, 1972.
[50] B. Livshits et al., “In defense of soundiness: A manifesto,” Communications of the ACM, vol. 58, no. 2, pp. 44–46, 2015.
[51] L.-N. Pouchet, “Polybench: The polyhedral benchmark suite.” https://web.cse.ohio-state.edu/~pouchet.2/software/polybench.
[52] B. Fulgham and I. Gouy, “The computer language benchmarks game.” https://benchmarksgame-team.pages.debian.net/benchmarksgame/.
[53] C. Ballabriga, J. Forget, L. Gonnord, G. Lipari, and J. Ruiz, “Static analysis of binary code with memory indirections using polyhedra,” in Verification, model checking, and abstract interpretation - 20th international conference, VMCAI 2019, cascais, portugal, january 13-15, 2019, proceedings, 2019, pp. 114–135.
[54] J. Nicolay, Q. Stiévenart, W. D. Meuter, and C. D. Roover, “Effect-driven flow analysis,” in Verification, model checking, and abstract interpretation - 20th international conference, VMCAI 2019, cascais, portugal, january 13-15, 2019, proceedings, 2019, pp. 247–274.
[55] D. Chen, F. Liu, C. Ding, and S. Pai, “Locality analysis through static parallel sampling,” in Proceedings of the 39th ACM SIGPLAN conference on programming language design and implementation, PLDI 2018, philadelphia, pa, usa, june 18-22, 2018, 2018, pp. 557–570.
[56] G. Savrun-Yeniçeri et al., “Efficient interpreter optimizations for the JVM,” in Proceedings of the 2013 international conference on principles and practices of programming on the java platform: Virtual machines, languages, and tools, stuttgart, germany, september 11-13, 2013, 2013, pp. 113–123.
[57] “WASI libc implementation for webassembly.” https://github.com/WebAssembly/wasi-libc.
[58] O. Tripp, M. Pistoia, S. J. Fink, M. Sridharan, and O. Weisman, “TAJ: effective taint analysis of web applications,” in Proceedings of the 2009 ACM SIGPLAN conference on programming language design and implementation, PLDI 2009, dublin, ireland, june 15-21, 2009, 2009, pp. 87–97.
[59] D. Lehmann and M. Pradel, “Wasabi: A framework for dynamically analyzing webassembly,” in Proceedings of the twenty-fourth international conference on architectural support for programming languages and operating systems, ASPLOS 2019, providence, ri, usa, april 13-17, 2019, 2019, pp. 1045–1058.
[60] C. Watt, P. Maksimovic, N. R. Krishnaswami, and P. Gardner, “A program logic for first-order encapsulated webassembly,” in 33rd european conference on object-oriented programming, ECOOP 2019, july 15-19, 2019, london, united kingdom, 2019, pp. 9:1–9:30.
[61] C. Watt, J. Renner, N. Popescu, S. Cauligi, and D. Stefan, “CT-wasm: Type-driven secure cryptography for the web ecosystem,” Proc. ACM Program. Lang., vol. 3, no. POPL, pp. 77:1–77:29, 2019.
[62] A. Aiken, S. Bugrara, I. Dillig, T. Dillig, B. Hackett, and P. Hawkins, “An overview of the saturn project,” in Proceedings of the 7th ACM SIGPLAN-SIGSOFT workshop on program analysis for software tools and engineering, paste’07, san diego, california, usa, june 13-14, 2007, 2007, pp. 43–48.
[63] H. Tang, X. Wang, L. Zhang, B. Xie, L. Zhang, and H. Mei, “Summary-based context-sensitive data-dependence analysis in presence of callbacks,” in Proceedings of the 42nd annual ACM SIGPLAN-SIGACT symposium on principles of programming languages, POPL 2015, mumbai, india, january 15-17, 2015, 2015, pp. 83–95.
[64] F. Cassez, C. Müller, and K. Burnett, “Summary-based inter-procedural analysis via modular trace refinement,” in 34th international conference on foundation of software technology and theoretical computer science, FSTTCS 2014, december 15-17, 2014, new delhi, india, 2014, pp. 545–556.
[65] O. Tripp, M. Pistoia, P. Cousot, R. Cousot, and S. Guarnieri, “Andromeda: Accurate and scalable security analysis of web applications,” in Fundamental approaches to software engineering - 16th international conference, FASE 2013, rome, italy, march 16-24, 2013. Proceedings, 2013, pp. 210–225.
[66] S. Arzt et al., “FlowDroid: Precise context, flow, field, object-sensitive and lifecycle-aware taint analysis for android apps,” in ACM SIGPLAN conference on programming language design and implementation, PLDI ’14, edinburgh, united kingdom - june 09 - 11, 2014, 2014, pp. 259–269.
[67] W. Klieber, L. Flynn, W. Snavely, and M. Zheng, “Practical precise taint-flow static analysis for android app sets,” in Proceedings of the 13th international conference on availability, reliability and security, ARES 2018, hamburg, germany, august 27-30, 2018, 2018, pp. 56:1–56:7.
[68] N. Jovanovic, C. Kruegel, and E. Kirda, “Static analysis for detecting taint-style vulnerabilities in web applications,” J. Comput. Secur., vol. 18, no. 5, pp. 861–907, 2010.
[69] J. D. Bleser, Q. Stiévenart, J. Nicolay, and C. D. Roover, “Static taint analysis of event-driven scheme programs,” in Proceedings of the 10th european lisp symposium (ELS 2017), brussels, belgium, april 3-4, 2017, 2017, pp. 80–87.
[70] R. Zhang, S. Huang, and Z. Qi, “Efficient taint analysis with taint behavior summary,” in Third international conference on communications and mobile computing, CMC 2011, qingdao, china, 18-20 april 2011, 2011, pp. 11–14.
[71] A. Miné, “Abstract domains for bit-level machine integer and floating-point operations,” in ATx’12/wing’12: Joint proceedings of the workshops on automated theory eXploration and on invariant generation, manchester, uk, june 2012, 2012, pp. 55–70.
[72] R. Vallée-Rai, P. Co, E. Gagnon, L. J. Hendren, P. Lam, and V. Sundaresan, “Soot - a java bytecode optimization framework,” in Proceedings of the 1999 conference of the centre for advanced studies on collaborative research, november 8-11, 1999, mississauga, ontario, canada, 1999, p. 13.
[73] R. Vallée-Rai, E. Gagnon, L. J. Hendren, P. Lam, P. Pominville, and V. Sundaresan, “Optimizing java bytecode using the soot framework: Is it feasible?” in Compiler construction, 9th international conference, CC 2000, held as part of the european joint conferences on the theory and practice of software, ETAPS 2000, berlin, germany, march 25 - april 2, 2000, proceedings, 2000, pp. 18–34.