Initial import

LoxiGen is the work of several developers, not just myself.
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..921eb25
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,6 @@
+*.pyc
+*~
+target_code
+local
+.loxi_ts.*
+.loxi_gen_files
diff --git a/INTERNALS b/INTERNALS
new file mode 100644
index 0000000..bd4546a
--- /dev/null
+++ b/INTERNALS
@@ -0,0 +1,76 @@
+
+Here are a few notes about the LOXI processing flow.
+
+Currently there are two pieces of input for each version to be supported.
+
+(1) The original openflow.h header file.  This is parsed to extract
+identifiers such as #defines and enum definitions.  These are in the
+'canonical' directory.
+
+(2) A specially processed list of structs derived from the original
+openflow.h header file.  These are the structs that represent the
+protocol on the wire, with the following minor modifications (an
+illustrative example follows the list):
+** ofp_header structure instances are replaced by their contents
+** Arrays are replaced with the syntax 'data_type[length] identifier'.
+** Lists of objects are called out explicitly as 'list(data_type) identifier'
+** Match structures are renamed to be version specific
+** Each flavor of a flow modify (add, modify, modify strict, delete
+and delete strict) is called out as a different object
+** Each action type (for instance) is called out as its own type.
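+
+Purely as an illustration of the notation above (the struct and field
+names here are hypothetical, not taken from the real input files):
+
+    struct of_packet_queue {
+        uint32_t queue_id;
+        uint16_t len;
+        uint8_t[2] pad;
+        list(of_queue_prop) properties;
+    };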
+
+Copyright 2012, Big Switch Networks, Inc.
+
+Enumerations/defines give semantic values for two contexts:
+
+* Internal management of objects, for example, the particular values that
+indicate a message is an Echo message or an action is an output action.
+These values, like the wire format, are generally not of interest to 
+the users of LOXI.
+
+* External representation of information. These are values which users of
+LOXI need to know about, at least through an identifier.  Examples include
+OFP_TCP_PORT, OFP_MAX_TABLE_NAME_LEN or OFPP_MAX.  
+
+In general, processing proceeds by:
+
+(1) Extracting information from each version's input files.
+
+(2) Unifying the information across all versions, allowing the
+identification of commonalities and differences.
+
+(3) Calling the language-specific generation routines for each
+target file.  The list of files to generate and the map from
+file to generating function are given in the language-specific
+Python file, such as lang_c.py, at the top level.
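+
+As a rough sketch of what such a map might look like (the file paths and
+the generate() helper below are illustrative, not the actual contents of
+lang_c.py):
+
+    # Hypothetical excerpt in the spirit of lang_c.py
+    import os
+    import c_gen.c_code_gen as c_code_gen
+
+    # Map: generated file name -> function that writes its contents
+    targets = {
+        "loci/inc/loci/loci_base.h": c_code_gen.base_h_gen,
+        "loci/inc/loci/loci_idents.h": c_code_gen.identifiers_gen,
+        "loci/src/loci.c": c_code_gen.top_c_gen,
+    }
+
+    def generate(install_dir):
+        # Assumes the target directories already exist
+        for name, gen_fn in targets.items():
+            with open(os.path.join(install_dir, name), "w") as out:
+                gen_fn(out, name)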
+
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+The code layout is as follows (explanations below):
+
+BigCode/Modules/
+    LoxiGen/
+        Makefile
+        loxigen.py      Entry point executable
+        of_g.py         Global variables   
+        lang_c.py, ...  Language specific
+        loxi_front_end/ Python functions for processing input
+        loxi_utils/     General utility functions
+        canonical/      openflow.h header files
+            openflow.h-<of-version>
+        openflow_input/ pre-processed openflow.h input
+            structs-<of-version>
+        c_gen/          Python functions for C code generation
+        c_template/     Template including non-autogen files
+        utest/          Simple Python scripts to test functions
+
+For C code generation, the output is placed in the BigCode module format.
+First, the C template directory is copied over to a target directory.
+Then the automatically generated files are created and placed in the
+proper locations in the target directory.  Then the result is tarred
+up for overlay onto another location.
+
+To test the code locally, the target file is untarred into a local
+directory and a special make file (in c_gen/Makefile.local) is copied
+into the local directory.
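+
+For example, assuming the generated tarball has already been extracted
+into a directory named 'local' (the steps below are illustrative):
+
+    cp c_gen/Makefile.local local/Makefile
+    cd local && make test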
+
+
diff --git a/LoxiGen.mk b/LoxiGen.mk
new file mode 100644
index 0000000..bf7b0f4
--- /dev/null
+++ b/LoxiGen.mk
@@ -0,0 +1,39 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+#
+# Static make rules for LoxiGen
+#
+LOXIGEN_DIR := $(dir $(lastword $(MAKEFILE_LIST)))
+
+LoxiGen:
+	$(MAKE) -C $(LOXIGEN_DIR) all
+
+ALL_TARGETS += LoxiGen
+
+DEPENDMODULES_XHEADER_EXCLUDES += LoxiGen
+
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..2fff433
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,91 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+# Available targets: all, c, python, clean
+
+# This Makefile is just for convenience. Users that need to pass additional
+# options to loxigen.py are encouraged to run it directly.
+
+# Where to put the generated code.
+LOXI_OUTPUT_DIR = loxi_output
+
+# Generated files depend on all Loxi code and input files
+LOXI_PY_FILES=$(shell find \( -name loxi_output -prune \
+                             -o -name templates -prune \
+                             -o -true \
+                           \) -a -name '*.py')
+LOXI_TEMPLATE_FILES=$(shell find */templates -type f -a \
+                                 \! \( -name '*.cache' -o -name '.*' \))
+INPUT_FILES = $(wildcard openflow_input/*) $(wildcard canonical/*)
+
+all: c python
+
+c: .loxi_ts.c
+
+.loxi_ts.c: ${LOXI_PY_FILES} ${LOXI_TEMPLATE_FILES} ${INPUT_FILES}
+	./loxigen.py --install-dir=${LOXI_OUTPUT_DIR} --lang=c
+	touch $@
+
+python: .loxi_ts.python
+
+.loxi_ts.python: ${LOXI_PY_FILES} ${LOXI_TEMPLATE_FILES} ${INPUT_FILES}
+	./loxigen.py --install-dir=${LOXI_OUTPUT_DIR} --lang=python
+	touch $@
+
+clean:
+	rm -rf loxi_output # only delete generated files in the default directory
+	rm -f loxigen.log loxigen-test.log .loxi_ts.c .loxi_ts.python
+
+debug:
+	@echo "LOXI_OUTPUT_DIR=\"${LOXI_OUTPUT_DIR}\""
+	@echo
+	@echo "LOXI_PY_FILES=\"${LOXI_PY_FILES}\""
+	@echo
+	@echo "LOXI_TEMPLATE_FILES=\"${LOXI_TEMPLATE_FILES}\""
+	@echo
+	@echo "INPUT_FILES=\"${INPUT_FILES}\""
+
+check:
+	@echo Sending output to loxigen-test.log
+	cd utest && \
+		./identifiers_test.py > ../loxigen-test.log && \
+		./c_utils_test.py >> ../loxigen-test.log && \
+		./of_h_utils_test.py >> ../loxigen-test.log
+	PYTHONPATH=. ./utest/test_parser.py
+
+pylint:
+	pylint -E ${LOXI_PY_FILES}
+
+.PHONY: all clean debug check pylint c python
+
+ifdef BIGCODE
+# Internal build system compatibility
+MODULE := LoxiGen
+LOXI_OUTPUT_DIR = ${BIGCODE}/Modules
+modulemake:
+.PHONY: modulemake
+endif
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..63b91f3
--- /dev/null
+++ b/README.md
@@ -0,0 +1,41 @@
+Introduction
+============
+
+LoxiGen is a tool that generates OpenFlow protocol libraries for a number of
+languages. It is composed of a frontend that parses wire protocol descriptions
+and a backend for each supported language (currently C and Python, with Java on
+the way).
+
+
+Usage
+=====
+
+You can run LoxiGen directly from the repository. There's no need to install it,
+and it has no dependencies beyond Python 2.6+.
+
+To generate the libraries for all languages:
+
+```
+make
+```
+
+To generate the library for a single language:
+
+```
+make c
+```
+
+The currently supported languages are `c` and `python`.
+
+The generated libraries will be under the `loxi_output` directory. This can be
+changed by setting the `LOXI_OUTPUT_DIR` variable on the make command line.
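+
+For example, to place the generated C library under `/tmp/loxi`:
+
+```
+make LOXI_OUTPUT_DIR=/tmp/loxi c
+```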
+
+Each generated library comes with its own set of documentation in the standard
+format for that language. Please see that documentation for more details on
+using the generated libraries.
+
+Contributing
+============
+
+Please fork the repository on GitHub and send us a pull request. You might also
+be interested in the INTERNALS file which has notes about how LoxiGen works.
diff --git a/TODO b/TODO
new file mode 100644
index 0000000..2301b11
--- /dev/null
+++ b/TODO
@@ -0,0 +1,17 @@
+
+
+For lists of uint32 and uint8, the easy approach was taken, which was to
+add OF types (wrappers) for instances of these types.  This makes some
+things much easier, but makes it awkward, for example, to simply append
+a 32-bit value to the end of a list.  Some helper functions should be
+added to make this easier.  (Maybe?  Seems to be working so far.)
+
+With the added support for auto-declaring variable-length arrays indexed by
+version, lots of trailing commas have been added.
+
+Add type and length support for action_id classes.
+
+Table feature prop uses 0xfffe instead of 0xffff as the experimenter value.
+
+Support TABLE_FEATURE_PROP_EXPERIMENTER/_MISS
+
diff --git a/c_gen/Makefile.local b/c_gen/Makefile.local
new file mode 100644
index 0000000..fb544d5
--- /dev/null
+++ b/c_gen/Makefile.local
@@ -0,0 +1,91 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+# Makefile to test LOCI generated files locally
+# Assumes code is in BigCode layout
+# Normally the loci and locitest code is put into a directory 'local' with
+# this make file copied to its top level.
+
+LOCI_SOURCES:=$(wildcard Modules/loci/module/src/*.c)
+LOCI_OBJECTS:=$(LOCI_SOURCES:.c=.o)
+LOCI_HEADERS:=$(wildcard Modules/loci/module/inc/loci/*.h)
+LOCI_HEADERS+=$(wildcard Modules/loci/module/src/*.h)
+
+LOCITEST_SOURCES:=$(wildcard Modules/locitest/module/src/*.c)
+LOCITEST_OBJECTS:=$(LOCITEST_SOURCES:.c=.o)
+LOCITEST_HEADERS:=$(wildcard Modules/locitest/module/inc/locitest/*.h)
+LOCITEST_HEADERS+=$(wildcard Modules/locitest/module/src/*.h)
+
+LOCITEST_MAIN:=Modules/locitest/utest/main.c
+
+ALL_SOURCES:=${LOCITEST_SOURCES} ${LOCI_SOURCES}
+ALL_OBJECTS:=$(ALL_SOURCES:.c=.o)
+ALL_HEADERS:=${LOCITEST_HEADERS} ${LOCI_HEADERS}
+
+CFLAGS:=-Wall -g -I Modules/loci/module/inc -I Modules/locitest/module/inc -O0
+
+all: test
+
+test: locitest
+	./locitest
+
+%.o: %.c ${ALL_HEADERS}
+	gcc -c ${CFLAGS} $< -o $@
+
+%.E: %.c ${ALL_HEADERS}
+	gcc -E ${CFLAGS} $< -o $@
+
+libloci.a: ${LOCI_OBJECTS}
+	ar rcu libloci.a ${LOCI_OBJECTS}
+	ranlib libloci.a
+
+liblocitest.a: ${LOCITEST_OBJECTS}
+	ar rcu liblocitest.a ${LOCITEST_OBJECTS}
+	ranlib liblocitest.a
+
+# Test executable
+locitest: ${LOCITEST_MAIN} libloci.a liblocitest.a
+	gcc $< ${CFLAGS} -l locitest -l loci -L . -o $@
+
+show:
+	@echo ALL_SOURCES ${ALL_SOURCES}
+	@echo ALL_OBJECTS ${ALL_OBJECTS}
+	@echo ALL_HEADERS ${ALL_HEADERS}
+
+help:
+	@echo "Run loci unit tests locally"
+
+clean:
+	find . -name '*.o' | xargs rm -f
+	find . -name '*.E' | xargs rm -f
+	rm -f libloci.a liblocitest.a locitest
+
+# TBD
+doc: ${ALL_HEADERS}
+	doxygen Doxyfile
+
+.PHONY: all test lib clean show doc
diff --git a/c_gen/__init__.py b/c_gen/__init__.py
new file mode 100644
index 0000000..5e4e379
--- /dev/null
+++ b/c_gen/__init__.py
@@ -0,0 +1,26 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
diff --git a/c_gen/c_code_gen.py b/c_gen/c_code_gen.py
new file mode 100644
index 0000000..0a10715
--- /dev/null
+++ b/c_gen/c_code_gen.py
@@ -0,0 +1,3616 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+@file c_code_gen.py
+Code generation functions for LOCI
+"""
+
+import sys
+import of_g
+import c_match
+from generic_utils import *
+import c_gen.c_type_maps as c_type_maps
+import loxi_front_end.type_maps as type_maps
+import loxi_front_end.flags as flags
+import loxi_utils.loxi_utils as loxi_utils
+import loxi_front_end.identifiers as identifiers
+
+# 'property' is for queues. Could be trouble
+
+
+################################################################
+#
+# Misc helper functions
+#
+################################################################
+
+def h_file_to_define(name):
+    """
+    Convert a .h file name to the define used for the header
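+    For example (illustrative): "loci_base.h" becomes "_LOCI_BASE_H_"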
+    """
+    h_name = name[:-2].upper()
+    h_name = "_" + h_name + "_H_"
+    return h_name
+
+def enum_name(cls):
+    """
+    Return the name used for an enum identifier for the given class
+    @param cls The class name
+    """
+    return loxi_utils.enum_name(cls)
+
+def member_returns_val(cls, m_name):
+    """
+    Should get accessor return a value rather than void
+    @param cls The class name
+    @param m_name The member name
+    @return True if of_g config and the specific member allow a 
+    return value.  Otherwise False
+    """
+    m_type = of_g.unified[cls]["union"][m_name]["m_type"]
+    return (config_check("get_returns") == "value" and
+            m_type in of_g.of_scalar_types)
+
+# TODO serialize match outside accessor?
+def accessor_return_type(a_type, m_type):
+    if loxi_utils.accessor_returns_error(a_type, m_type):
+        return "int WARN_UNUSED_RESULT"
+    else:
+        return "void"
+
+def accessor_return_success(a_type, m_type):
+    if loxi_utils.accessor_returns_error(a_type, m_type):
+        return "OF_ERROR_NONE"
+    else:
+        return ""
+
+################################################################
+#
+# Per-file generators, mapped to jump table below
+#
+################################################################
+
+def base_h_gen(out, name):
+    """
+    Generate code for base header file
+    @param out The file handle to write to
+    @param name The name of the file
+    """
+    common_top_matter(out, name)
+    base_h_content(out)
+    gen_object_enum(out)
+    out.write("""
+/****************************************************************
+ *
+ * Experimenter IDs
+ *
+ ****************************************************************/
+
+""")
+    for name, val in of_g.experimenter_name_to_id.items():
+        out.write("#define OF_EXPERIMENTER_ID_%s 0x%08x\n" %
+                  (name.upper(), val))
+
+    out.write("""
+/****************************************************************
+ *
+ * OpenFlow Match version specific and generic defines
+ *
+ ****************************************************************/
+""")
+    c_match.gen_v4_match_compat(out)
+    c_match.gen_match_macros(out)
+    c_match.gen_oxm_defines(out)
+    out.write("\n#endif /* Base header file */\n")
+
+def identifiers_gen(out, filename):
+    """
+    Generate the macros for LOCI identifiers
+    @param out The file handle to write to
+    @param filename The name of the file
+    """
+    common_top_matter(out, filename)
+    out.write("""
+/**
+ * For each identifier from an OpenFlow header file, a Loxi version
+ * of the identifier is generated.  For example, ofp_port_flood becomes
+ * OF_PORT_DEST_FLOOD.  Loxi provides the following macros related to 
+ * OpenFlow identifiers (using OF_IDENT_ as an example below):
+ *     OF_IDENT_BY_VERSION(version) Get the value for the specific version
+ *     OF_IDENT_SUPPORTED(version) Boolean: Is OF_IDENT defined for version
+ *     OF_IDENT The common value across all versions if defined
+ *     OF_IDENT_GENERIC A unique value across all OF identifiers
+ *
+ * For identifiers marked as flags, the following are also defined
+ *     OF_IDENT_SET(flags, version)
+ *     OF_IDENT_CLEAR(flags, version)
+ *     OF_IDENT_TEST(flags, version)
+ *
+ * Notes:
+ *
+ *     OF_IDENT_BY_VERSION(version) returns an undefined value
+ * if the passed version does not define OF_IDENT.  It does not generate an
+ * error, nor record anything to the log file.  If the value is the same
+ * across all defined versions, the version is ignored.
+ *
+ *     OF_IDENT is only defined if the value is the same across all
+ * target LOXI versions FOR WHICH IT IS DEFINED.  No error checking is
+ * done.  This allows code to be written without requiring the version
+ * to be known or referenced when it doesn't matter.  It does mean
+ * that when porting to a new version of OpenFlow, compile errors may
+ * occur.  However, this is an indication that the existing code must
+ * be updated to account for a change in the semantics with the newly
+ * supported OpenFlow version.
+ *
+ * @fixme Currently we do not handle multi-bit flags or field values; for
+ * example, OF_TABLE_CONFIG_TABLE_MISS_CONTROLLER is the meaning for
+ * a zero value in the bits indicated by OF_TABLE_CONFIG_TABLE_MISS_MASK.
+ *
+ * @fixme Need to decide (or make a code gen option) on the requirement
+ * for defining OF_IDENT:  Is it that all target versions define it and
+ * they agree?  Or only that the versions which define it agree?
+ */
+""")
+
+    # Build value-by-version parameters and c_code
+    if len(of_g.target_version_list) > 1: # Supporting more than one version
+        vbv_params = []
+        vbv_code = ""
+        first = True
+        for version in of_g.target_version_list:
+            vbv_params.append("value_%s" % of_g.short_version_names[version])
+            if not first:
+                vbv_code += "\\\n     "
+            else:
+                first = False
+            last_value = "value_%s" % of_g.short_version_names[version]
+            vbv_code += "((version) == %s) ? (%s) : " % \
+                (of_g.of_version_wire2name[version], last_value)
+        # @todo Using last value, can optimize out last ?
+        vbv_code += "(%s)" % last_value
+
+    out.write("""
+/**
+ * @brief True for the special case of all versions supported
+ */
+#define OF_IDENT_IN_ALL_VERSIONS 1 /* Indicates identifier in all versions */
+
+/**
+ * @brief General macro to map version to value where values given as params
+ *
+ * If unknown version is passed, use the latest version's value
+ */
+#define OF_VALUE_BY_VERSION(version, %s) \\
+    (%s)
+
+/**
+ * @brief Generic set a flag
+ */
+#define OF_FLAG_SET(flags, mask) (flags) = (flags) | (mask)
+
+/**
+ * @brief Generic clear a flag
+ */
+#define OF_FLAG_CLEAR(flags, mask) (flags) = (flags) & ~(mask)
+
+/**
+ * @brief Generic test if a flag is set
+ */
+#define OF_FLAG_TEST(flags, mask) ((flags) & (mask) ? 1 : 0)
+
+/**
+ * @brief Set a flag where the value is an enum indication of bit shift
+ */
+#define OF_FLAG_ENUM_SET(flags, e_val) OF_FLAG_SET(flags, 1 << (e_val))
+
+/**
+ * @brief Clear a flag where the value is an enum indication of bit shift
+ */
+#define OF_FLAG_ENUM_CLEAR(flags, e_val) OF_FLAG_CLEAR(flags, 1 << (e_val))
+
+/**
+ * @brief Test a flag where the value is an enum indication of bit shift
+ */
+#define OF_FLAG_ENUM_TEST(flags, e_val) OF_FLAG_TEST(flags, 1 << (e_val))
+""" % (", ".join(vbv_params), vbv_code))
+
+    # For each group of identifiers, bunch ident defns
+    count = 1
+    keys = of_g.identifiers_by_group.keys()
+    keys.sort()
+    for group in keys:
+        idents = of_g.identifiers_by_group[group]
+        idents.sort()
+        out.write("""
+/****************************************************************
+ * Identifiers from %s 
+ *****************************************************************/
+""" % group)
+        for ident in idents:
+            info = of_g.identifiers[ident]
+
+            keys = info["values_by_version"].keys()
+            keys.sort()
+
+            out.write("""
+/*
+ * Defines for %(ident)s
+ * Original name %(ofp_name)s
+ */
+""" % dict(ident=ident, ofp_name=info["ofp_name"]))
+
+            # Generate supported versions macro
+            if len(keys) == len(of_g.target_version_list): # Defined for all
+                out.write("""\
+#define %(ident)s_SUPPORTED(version) OF_IDENT_IN_ALL_VERSIONS
+""" % dict(ident=ident))
+            else: # Undefined for some version
+                sup_list = []
+                for version in keys:
+                    sup_list.append("((version) == %s)" %
+                                    of_g.of_version_wire2name[version])
+                out.write("""\
+#define %(ident)s_SUPPORTED(version)      \\
+    (%(sup_str)s)
+""" % dict(ident=ident, sup_str=" || \\\n     ".join(sup_list)))
+
+            # Generate value macro
+            if identifiers.defined_versions_agree(of_g.identifiers,
+                                                  of_g.target_version_list,
+                                                  ident):
+                out.write("""\
+#define %(ident)s (%(value)s)
+#define %(ident)s_BY_VERSION(version) (%(value)s)
+""" % dict(ident=ident,value=info["common_value"]))
+            else: # Values differ between versions
+                # Generate version check and value by version
+                val_list = []
+                # Order of params matters
+                for version in of_g.target_version_list:
+                    if version in info["values_by_version"]:
+                        value = info["values_by_version"][version]
+                    else:
+                        value = identifiers.UNDEFINED_IDENT_VALUE
+                    val_list.append("%s" % value)
+                out.write("""\
+#define %(ident)s_BY_VERSION(version)     \\
+    OF_VALUE_BY_VERSION(version, %(val_str)s)
+""" % dict(ident=ident, val_str=", ".join(val_list)))
+            if flags.ident_is_flag(ident):
+                log("Treating %s as a flag" % ident)
+                out.write("""
+#define %(ident)s_SET(flags, version)     \\
+    OF_FLAG_SET(flags, %(ident)s_BY_VERSION(version))
+#define %(ident)s_TEST(flags, version)    \\
+    OF_FLAG_TEST(flags, %(ident)s_BY_VERSION(version))
+#define %(ident)s_CLEAR(flags, version)   \\
+    OF_FLAG_CLEAR(flags, %(ident)s_BY_VERSION(version))
+""" % dict(ident=ident))
+
+            out.write("#define %(ident)s_GENERIC %(count)d\n"
+                      % dict(ident=ident, count=count))
+            count += 1 # This count should probably be promoted higher
+
+    log("Generated %d identifiers" % (count - 1))
+    out.write("\n#endif /* Loci identifiers header file */\n")
+
+def base_h_external(out, filename):
+    """
+    Copy contents of external file to base header
+
+    The contents of the filename are copied literally into the
+    out file handler.  This allows openflow common defines to
+    be entered into the LoxiGen code base.  The content of this
+    code must depend only on standard C headers.
+    """
+    infile = open(filename, "r")
+    contents = infile.read()
+    out.write(contents)
+    infile.close()
+
+def match_h_gen(out, name):
+    """
+    Generate code for the match header file
+    @param out The file handle to write to
+    @param name The name of the file
+    """
+    c_match.match_h_top_matter(out, name)
+    c_match.gen_incompat_members(out)
+    c_match.gen_match_struct(out)
+    c_match.gen_match_comp(out)
+#    c_match.gen_match_accessors(out)
+    out.write("\n#endif /* Match header file */\n")
+
+def top_h_gen(out, name):
+    """
+    Generate code for the top-level LOCI header file
+    @param out The file handle to write to
+    @param name The name of the file
+    """
+    external_h_top_matter(out, name)
+    out.write("""
+
+typedef enum loci_log_level {
+    LOCI_LOG_LEVEL_TRACE,
+    LOCI_LOG_LEVEL_VERBOSE,
+    LOCI_LOG_LEVEL_INFO,
+    LOCI_LOG_LEVEL_WARN,
+    LOCI_LOG_LEVEL_ERROR,
+    LOCI_LOG_LEVEL_MSG
+} loci_log_level_t;
+
+/**
+ * @brief Output a log message.
+ * @param level The log level.
+ * @param fname The function name.
+ * @param file The file name.
+ * @param line The line number.
+ * @param format The message format string.
+ */
+typedef int (*loci_logger_f)(loci_log_level_t level,
+                             const char *fname, const char *file, int line,
+                             const char *format, ...);
+
+/*
+ * This variable should be set by the user of the library to redirect logs to
+ * their log infrastructure. The default drops all logs.
+ */
+extern loci_logger_f loci_logger;
+
+/**
+ * Map a generic object to the underlying wire buffer
+ *
+ * Treat as private
+ */
+#define OF_OBJECT_TO_MESSAGE(obj) \\
+    ((of_message_t)(WBUF_BUF((obj)->wire_object.wbuf)))
+
+/**
+ * Macro for the fixed length part of an object
+ *
+ * @param obj The object whose fixed length is being looked up
+ * @returns length in bytes of non-variable part of the object
+ */
+#define OF_OBJECT_FIXED_LENGTH(obj) \\
+    (of_object_fixed_len[(obj)->version][(obj)->object_id])
+
+/**
+ * Return the length of the object beyond its fixed length
+ *
+ * @param obj The object whose extended length is being calculated
+ * @returns length in bytes of the variable part of the object
+ *
+ * Most variable length fields are alone at the end of a structure.
+ * Their length is a simple calculation, just the total length of
+ * the parent minus the length of the non-variable part of the
+ * parent's class type.
+ */
+
+#define OF_OBJECT_VARIABLE_LENGTH(obj) \\
+    ((obj)->length - OF_OBJECT_FIXED_LENGTH(obj))
+
+/* FIXME: Where do these go? */
+/* Low level maps btwn wire version + type and object ids */
+extern int of_message_is_stats_request(int type, int w_ver);
+extern int of_message_is_stats_reply(int type, int w_ver);
+extern int of_message_stats_reply_to_object_id(int stats_type, int w_ver);
+extern int of_message_stats_request_to_object_id(int stats_type, int w_ver);
+extern int of_message_type_to_object_id(int type, int w_ver);
+
+extern int of_wire_buffer_of_match_get(of_object_t *obj, int offset,
+                                    of_match_t *match);
+extern int of_wire_buffer_of_match_set(of_object_t *obj, int offset,
+                                    of_match_t *match, int cur_len);
+extern void of_extension_object_id_set(of_object_t *obj, of_object_id_t id);
+""")
+
+    # gen_base_types(out)
+
+    gen_struct_typedefs(out)
+    gen_acc_pointer_typedefs(out)
+    gen_new_function_declarations(out)
+    if config_check("gen_unified_fns"):
+        gen_accessor_declarations(out)
+
+    gen_common_struct_definitions(out)
+    gen_flow_add_setup_function_declarations(out)
+    if config_check("gen_fn_ptrs"): # Otherwise, all classes are from generic cls
+        gen_struct_definitions(out)
+    gen_generic_union(out)
+    gen_generics(out)
+    gen_top_static_functions(out)
+    out.write("""
+/****************************************************************
+ *
+ * Declarations of maps between on-the-wire type values and LOCI identifiers
+ *
+ ****************************************************************/
+""")
+    c_type_maps.gen_type_maps_header(out)
+    c_type_maps.gen_type_data_header(out)
+    c_match.gen_declarations(out)
+    # @fixme Move debug stuff to own fn
+    out.write("""
+/**
+ * Macro to check consistency of length for top level objects
+ *
+ * If the object has no parent then its length should match the
+ * underlying wire buffer's current bytes.
+ */
+#define OF_LENGTH_CHECK_ASSERT(obj) \\
+    ASSERT(((obj)->parent != NULL) || \\
+     ((obj)->wire_object.wbuf == NULL) || \\
+     (WBUF_CURRENT_BYTES((obj)->wire_object.wbuf) == (obj)->length))
+
+#define OF_DEBUG_DUMP
+#if defined(OF_DEBUG_DUMP)
+extern void dump_match(of_match_t *match);
+#endif /* OF_DEBUG_DUMP */
+""")
+
+    out.write("\n#endif /* Top header file */\n")
+
+def match_c_gen(out, name):
+    """
+    Generate code for the match C implementation file
+    @param out The file handle to write to
+    @param name The name of the file
+    """
+    c_match.match_c_top_matter(out, name)
+    c_match.gen_match_conversions(out)
+    c_match.gen_serialize(out)
+    c_match.gen_deserialize(out)
+
+def gen_len_offset_macros(out):
+    """
+    Special case length and offset calculations put directly into
+    loci.c as they are private.
+    """
+
+    out.write("""
+/****************************************************************
+ * Special case macros for calculating variable lengths and offsets
+ ****************************************************************/
+
+/**
+ * Get a u16 directly from an offset in an object's wire buffer
+ * @param obj An of_object_t object
+ * @param offset Base offset of the uint16 relative to the object
+ *
+ */
+
+static inline int
+of_object_u16_get(of_object_t *obj, int offset) {
+    uint16_t val16;
+
+    of_wire_buffer_u16_get(obj->wire_object.wbuf,
+        obj->wire_object.obj_offset + offset, &val16);
+
+    return (int)val16;
+}
+
+/**
+ * Set a u16 directly at an offset in an object's wire buffer
+ * @param obj An of_object_t object
+ * @param offset Base offset of the uint16 relative to the object
+ * @param value The value to store
+ *
+ */
+
+static inline void
+of_object_u16_set(of_object_t *obj, int offset, int value) {
+    uint16_t val16;
+
+    val16 = (uint16_t)value;
+    of_wire_buffer_u16_set(obj->wire_object.wbuf,
+        obj->wire_object.obj_offset + offset, val16);
+}
+
+/**
+ * Get length of an object with a TLV header with uint16_t
+ * @param obj An object with a match member
+ * @param offset The wire offset of the start of the object
+ *
+ * The length field follows the type field.
+ */
+
+#define _TLV16_LEN(obj, offset) \\
+    (of_object_u16_get((of_object_t *)(obj), (offset) + 2))
+
+/**
+ * Get length of an object that is the "rest" of the object
+ * @param obj An object with a match member
+ * @param offset The wire offset of the start of the object
+ *
+ */
+
+#define _END_LEN(obj, offset) ((obj)->length - (offset))
+
+/**
+ * Get length of the action list object in a packet_out object
+ * @param obj An object of type of_packet_out
+ *
+ * The length field is just before the end of the fixed length
+ * part of the object in all versions.
+ */
+
+#define _PACKET_OUT_ACTION_LEN(obj) \\
+    (of_object_u16_get((of_object_t *)(obj), \\
+     of_object_fixed_len[(obj)->version][OF_PACKET_OUT] - 2))
+
+/**
+ * Set length of the action list object in a packet_out object
+ * @param obj An object of type of_packet_out
+ *
+ * The length field is just before the end of the fixed length
+ * part of the object in all versions.
+ */
+
+#define _PACKET_OUT_ACTION_LEN_SET(obj, len) \\
+    (of_object_u16_set((of_object_t *)(obj), \\
+     of_object_fixed_len[(obj)->version][OF_PACKET_OUT] - 2, len))
+
+/*
+ * Match structs in 1.2 come at the end of the fixed length part
+ * of structures.  They add 8 bytes to the minimal length of the
+ * message, but are also variable length.  This means that the 
+ * type/length offsets are 8 bytes back from the end of the fixed 
+ * length part of the object.  The right way to handle this is to 
+ * expose the offset of the match member more explicitly.  For now, 
+ * we make the calculation as described here.
+ */
+
+/* 1.2 min length of match is 8 bytes */
+#define _MATCH_MIN_LENGTH_V3 8
+
+/**
+ * The offset of a 1.2 match object relative to fixed length of obj
+ */
+#define _MATCH_OFFSET_V3(fixed_obj_len) \\
+    ((fixed_obj_len) - _MATCH_MIN_LENGTH_V3)
+
+/**
+ * The "extra" length beyond the minimal 8 bytes of a match struct
+ * in an object
+ */
+#define _MATCH_EXTRA_LENGTH_V3(obj, fixed_obj_len) \\
+    (OF_MATCH_BYTES(_TLV16_LEN(obj, _MATCH_OFFSET_V3(fixed_obj_len))) - \\
+     _MATCH_MIN_LENGTH_V3)
+
+/**
+ * The offset of an object following a match object for 1.2
+ */
+#define _OFFSET_FOLLOWING_MATCH_V3(obj, fixed_obj_len) \\
+    ((fixed_obj_len) + _MATCH_EXTRA_LENGTH_V3(obj, fixed_obj_len))
+
+/**
+ * Get length of a match object from its wire representation
+ * @param obj An object with a match member
+ * @param match_offset The wire offset of the match object.
+ *
+ * See above; for 1.2, 
+ * The match length is raw bytes but the actual space it takes
+ * up is padded for alignment to 64-bits
+ */
+#define _WIRE_MATCH_LEN(obj, match_offset) \\
+    (((obj)->version == OF_VERSION_1_0) ? %(match1)d : \\
+     (((obj)->version == OF_VERSION_1_1) ? %(match2)d : \\
+      _TLV16_LEN(obj, match_offset)))
+
+#define _WIRE_LEN_MIN 4
+
+/*
+ * Wrapper function for match len.  There are cases where the wire buffer
+ * has not been set with the proper minimum length.  In this case, the
+ * wire match len is interpreted as its minimum length, 4 bytes.
+ */
+
+static inline int
+wire_match_len(of_object_t *obj, int match_offset) {
+    int len;
+
+    len = _WIRE_MATCH_LEN(obj, match_offset);
+
+    return (len == 0) ? _WIRE_LEN_MIN : len;
+}
+
+#define _WIRE_MATCH_PADDED_LEN(obj, match_offset) \\
+    OF_MATCH_BYTES(wire_match_len((of_object_t *)(obj), (match_offset)))
+
+/**
+ * Macro to calculate variable offset of instructions member in flow mod
+ * @param obj An object of some type of flow modify/add/delete
+ *
+ * Get length of preceding match object and add to fixed length
+ * Applies only to version 1.2
+ */
+
+#define _FLOW_MOD_INSTRUCTIONS_OFFSET(obj) \\
+    _OFFSET_FOLLOWING_MATCH_V3(obj, %(flow_mod)d)
+
+/* The different flavors of flow mod all use the above */
+#define _FLOW_ADD_INSTRUCTIONS_OFFSET(obj) \\
+    _FLOW_MOD_INSTRUCTIONS_OFFSET(obj)
+#define _FLOW_MODIFY_INSTRUCTIONS_OFFSET(obj) \\
+    _FLOW_MOD_INSTRUCTIONS_OFFSET(obj)
+#define _FLOW_MODIFY_STRICT_INSTRUCTIONS_OFFSET(obj) \\
+    _FLOW_MOD_INSTRUCTIONS_OFFSET(obj)
+#define _FLOW_DELETE_INSTRUCTIONS_OFFSET(obj) \\
+    _FLOW_MOD_INSTRUCTIONS_OFFSET(obj)
+#define _FLOW_DELETE_STRICT_INSTRUCTIONS_OFFSET(obj) \\
+    _FLOW_MOD_INSTRUCTIONS_OFFSET(obj)
+
+/**
+ * Macro to calculate variable offset of instructions member in flow stats
+ * @param obj An object of type of_flow_stats_entry_t
+ *
+ * Get length of preceding match object and add to fixed length
+ * Applies only to version 1.2 and 1.3
+ */
+
+#define _FLOW_STATS_ENTRY_INSTRUCTIONS_OFFSET(obj) \\
+    _OFFSET_FOLLOWING_MATCH_V3(obj, %(flow_stats)d)
+
+/**
+ * Macro to calculate variable offset of data (packet) member in packet_in
+ * @param obj An object of type of_packet_in_t
+ *
+ * Get length of preceding match object and add to fixed length
+ * Applies only to version 1.2 and 1.3
+ */
+
+#define _PACKET_IN_DATA_OFFSET(obj) \\
+    _OFFSET_FOLLOWING_MATCH_V3((obj), (obj)->version == OF_VERSION_1_2 ? \
+%(packet_in)d : %(packet_in_1_3)d)
+
+/**
+ * Macro to calculate variable offset of data (packet) member in packet_out
+ * @param obj An object of type of_packet_out_t
+ *
+ * Find the length in the actions_len variable and add to the fixed len
+ * Applies only to version 1.2 and 1.3
+ */
+
+#define _PACKET_OUT_DATA_OFFSET(obj) (_PACKET_OUT_ACTION_LEN(obj) + \\
+     of_object_fixed_len[(obj)->version][OF_PACKET_OUT])
+
+/**
+ * Macro to map port numbers that changed across versions
+ * @param port The port_no_t variable holding the value
+ * @param ver The OpenFlow version from which the value was extracted
+ */
+#define OF_PORT_NO_VALUE_CHECK(port, ver) \\
+    if (((ver) == OF_VERSION_1_0) && ((port) > 0xff00)) (port) += 0xffff0000
+
+""" % dict(flow_mod=of_g.base_length[("of_flow_modify",of_g.VERSION_1_2)],
+           packet_in=of_g.base_length[("of_packet_in",of_g.VERSION_1_2)],
+           packet_in_1_3=of_g.base_length[("of_packet_in",of_g.VERSION_1_3)],
+           flow_stats=of_g.base_length[("of_flow_stats_entry",
+                                        of_g.VERSION_1_2)],
+           match1=of_g.base_length[("of_match_v1",of_g.VERSION_1_0)],
+           match2=of_g.base_length[("of_match_v2",of_g.VERSION_1_1)]))
+
+def gen_obj_id_macros(out):
+    """
+    Flow modify (add, delete) messages (and maybe others) use ID checks allowing
+    inheritance to use common accessor functions.
+    """
+    out.write("""
+/**
+ * Macro to detect if an object ID falls in the "flow mod" family of objects
+ * This includes add, modify, modify_strict, delete and delete_strict
+ */
+#define IS_FLOW_MOD_SUBTYPE(object_id)                 \\
+    (((object_id) == OF_FLOW_MODIFY) ||                \\
+     ((object_id) == OF_FLOW_MODIFY_STRICT) ||         \\
+     ((object_id) == OF_FLOW_DELETE) ||                \\
+     ((object_id) == OF_FLOW_DELETE_STRICT) ||         \\
+     ((object_id) == OF_FLOW_ADD))
+""")
+
+
+def top_c_gen(out, name):
+    """
+    Generate code for the top-level C file (loci.c)
+    @param out The file handle to write to
+    @param name The name of the file
+    """
+    common_top_matter(out, name)
+    # Generic C code that needs to go into loci.c can go here.
+    out.write("""
+/****************************************************************
+ *
+ * This file is divided into the following sections.
+ *
+ * Instantiate strings such as object names
+ * Special case macros for low level object access
+ * Per-class, per-member accessor definitions
+ * Per-class new/init function definitions
+ * Per-class new/init pointer instantiations
+ * Instantiate "set map" for pointer set fns
+ *
+ ****************************************************************/
+
+#include <loci/loci.h>
+#include <loci/of_object.h>
+#include "loci_log.h"
+
+""")
+    gen_object_enum_str(out)
+    gen_len_offset_macros(out)
+    gen_obj_id_macros(out)
+    if config_check("gen_unified_fns"):
+        gen_accessor_definitions(out)
+    gen_new_function_definitions(out)
+    gen_init_map(out)
+    out.write("\n/* This code should be broken out to a different file */\n")
+    gen_setup_from_add_fns(out)
+
+def type_data_c_gen(out, name):
+    common_top_matter(out, name)
+    c_type_maps.gen_type_maps(out)
+    c_type_maps.gen_length_array(out)
+
+################################################################
+# Top Matter
+################################################################
+
+def common_top_matter(out, name):
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""\
+/****************************************************************
+ * File: %s
+ *
+ * DO NOT EDIT
+ *
+ * This file is automatically generated
+ *
+ ****************************************************************/
+
+""" % name)
+
+    if name[-2:] == ".h":
+        out.write("""
+#if !defined(%(h)s)
+#define %(h)s
+
+""" % dict(h=h_file_to_define(name)))
+
+def base_h_content(out):
+    """
+    Generate base header file content
+
+    @param out The output file object
+    """
+
+    # @fixme Supported version should be generated based on input to LoxiGen
+
+    out.write("""
+/*
+ * Base OpenFlow definitions.  These depend only on standard C headers
+ */
+#include <string.h>
+#include <stdint.h>
+
+/* g++ requires this to pick up PRI, etc.
+ * See  http://gcc.gnu.org/ml/gcc-help/2006-10/msg00223.html
+ */
+#if !defined(__STDC_FORMAT_MACROS)
+#define __STDC_FORMAT_MACROS
+#endif
+#include <inttypes.h>
+
+#include <stdlib.h>
+#include <assert.h>
+#include <loci/loci_idents.h>
+
+/**
+ * Macro to enable debugging for LOCI.
+ *
+ * This enables debug output to stdout.
+ */
+#define OF_DEBUG_ENABLE
+
+#if defined(OF_DEBUG_ENABLE)
+#include <stdio.h> /* Currently for debugging */
+#define FIXME(str) do {                 \\
+        fprintf(stderr, "%s\\n", str);  \\
+        exit(1);                        \\
+    } while (0)
+#define debug printf
+#else
+#define FIXME(str)
+#define debug(str, ...)
+#endif /* OF_DEBUG_ENABLE */
+
+/**
+ * The type of a function used by the LOCI dump/show functions to
+ * output text. Essentially the same signature as fprintf. May
+ * be called many times per invocation of e.g. of_object_show().
+ */
+typedef int (*loci_writer_f)(void *cookie, const char *fmt, ...);
+
+/**
+ * Check if a version is supported
+ */
+#define OF_VERSION_OKAY(v) ((v) >= OF_VERSION_1_0 && (v) <= OF_VERSION_1_3)
+
+""")
+    gen_version_enum(out)
+    out.write("\n")
+
+    # for c_name in of_g.ofp_constants:
+    #     val = str(of_g.ofp_constants[c_name])
+    #     out.write("#define %s %s\n" % (c_name, val))
+    # out.write("\n")
+
+    out.write("""
+typedef enum of_error_codes_e {
+    OF_ERROR_NONE        = 0,
+    OF_ERROR_RESOURCE    = -1,    /* Could not allocate space */
+    OF_ERROR_PARAM       = -2,    /* Bad parameter */
+    OF_ERROR_VERSION     = -3,    /* Version not supported */
+    OF_ERROR_RANGE       = -4,    /* End of list indication */
+    OF_ERROR_COMPAT      = -5,    /* Incompatible assignment */
+    OF_ERROR_PARSE       = -6,    /* Error in parsing data */
+    OF_ERROR_INIT        = -7,    /* Uninitialized data */
+    OF_ERROR_UNKNOWN     = -8     /* Unknown error */
+} of_error_codes_t;
+
+#define OF_ERROR_STRINGS "none", \\
+    "resource", \\
+    "parameter", \\
+    "version", \\
+    "range", \\
+    "incompatible", \\
+    "parse", \\
+    "init", \\
+    "unknown"
+
+extern const char *of_error_strings[];
+
+/* #define ASSERT(val) assert(val) */
+#define FORCE_FAULT *(volatile int *)0 = 1
+#define ASSERT(val) if (!(val)) \\
+    fprintf(stderr, "\\nASSERT %s. %s:%d\\n", #val, __FILE__, __LINE__), \\
+    FORCE_FAULT
+
+/*
+ * Some LOCI object accessors can fail, and it's easy to forget to check.
+ * On certain compilers we can trigger a warning if the error code
+ * is ignored.
+ */
+#ifndef DISABLE_WARN_UNUSED_RESULT
+#ifdef __GNUC__
+#define WARN_UNUSED_RESULT __attribute__ ((warn_unused_result))
+#else
+#define WARN_UNUSED_RESULT
+#endif
+#else
+#define WARN_UNUSED_RESULT
+#endif
+
+typedef union of_generic_u of_generic_t;
+typedef struct of_object_s of_object_t;
+
+/* Define ipv4 address as uint32 */
+typedef uint32_t of_ipv4_t;
+
+/* Table ID is the OF standard uint8 */
+typedef uint8_t of_table_id_t;
+
+#define OF_MAC_ADDR_BYTES 6
+typedef struct of_mac_addr_s {
+   uint8_t addr[OF_MAC_ADDR_BYTES];
+} of_mac_addr_t;
+
+#define OF_IPV6_BYTES 16
+typedef struct of_ipv6_s {
+   uint8_t addr[OF_IPV6_BYTES];
+} of_ipv6_t;
+
+extern const of_mac_addr_t of_mac_addr_all_ones;
+extern const of_mac_addr_t of_mac_addr_all_zeros;
+
+extern const of_ipv6_t of_ipv6_all_ones;
+extern const of_ipv6_t of_ipv6_all_zeros;
+
+/**
+ * Generic zero and all-ones values of size 16 bytes.
+ *
+ * IPv6 is longest data type we worry about for comparisons
+ */
+#define of_all_zero_value of_ipv6_all_zeros
+#define of_all_ones_value of_ipv6_all_ones
+
+/**
+ * Non-zero/all ones check for arbitrary type of size <= 16 bytes
+ */
+#define OF_VARIABLE_IS_NON_ZERO(_ptr) \\
+    (MEMCMP(&of_all_zero_value, (_ptr), sizeof(*(_ptr))))
+#define OF_VARIABLE_IS_ALL_ONES(_ptr) \\
+    (!MEMCMP(&of_all_ones_value, (_ptr), sizeof(*(_ptr))))
+
+/* The octets object is a struct holding pointer and length */
+typedef struct of_octets_s {
+    uint8_t *data;
+    int bytes;
+} of_octets_t;
+
+/* Macro to convert an octet object to a pointer; currently trivial */
+#define OF_OCTETS_POINTER_GET(octet_ptr) ((octet_ptr)->data)
+#define OF_OCTETS_POINTER_SET(octet_ptr, ptr) (octet_ptr)->data = (ptr)
+#define OF_OCTETS_BYTES_GET(octet_ptr) ((octet_ptr)->bytes)
+#define OF_OCTETS_BYTES_SET(octet_ptr, bytes) (octet_ptr)->bytes = (bytes)
+
+/* Currently these are categorized as scalars */
+typedef char of_port_name_t[OF_MAX_PORT_NAME_LEN];
+typedef char of_table_name_t[OF_MAX_TABLE_NAME_LEN];
+typedef char of_desc_str_t[OF_DESC_STR_LEN];
+typedef char of_serial_num_t[OF_SERIAL_NUM_LEN];
+
+/* These are types which change across versions.  */
+typedef uint32_t of_port_no_t;
+typedef uint16_t of_fm_cmd_t;
+typedef uint64_t of_wc_bmap_t;
+typedef uint64_t of_match_bmap_t;
+
+#define MEMMOVE(dest, src, bytes) memmove(dest, src, bytes)
+#define MEMSET(dest, val, bytes) memset(dest, val, bytes)
+#define MEMCPY(dest, src, bytes) memcpy(dest, src, bytes)
+#define MEMCMP(a, b, bytes) memcmp(a, b, bytes)
+#define MALLOC(bytes) malloc(bytes)
+#define FREE(ptr) free(ptr)
+
+/** Try an operation and return on failure. */
+#define OF_TRY(op) do {                                                      \\
+        int _rv;                                                             \\
+        if ((_rv = (op)) < 0) {                                              \\
+            LOCI_LOG_ERROR("ERROR %d at %s:%d\\n", _rv, __FILE__, __LINE__); \\
+            return _rv;                                                      \\
+        }                                                                    \\
+    } while (0)
+
+/* The extent of an OF match object is determined by its length field, but
+ * aligned to 8 bytes
+ */
+
+#define OF_MATCH_BYTES(length) (((length) + 7) & 0xfff8)
+
+#if __BYTE_ORDER == __BIG_ENDIAN
+#define U16_NTOH(val) (val)
+#define U32_NTOH(val) (val)
+#define U64_NTOH(val) (val)
+#define IPV6_NTOH(dst, src) /* NOTE different syntax; currently no-op */
+#define U16_HTON(val) (val)
+#define U32_HTON(val) (val)
+#define U64_HTON(val) (val)
+#define IPV6_HTON(dst, src) /* NOTE different syntax; currently no-op */
+#else /* Little Endian */
+#define U16_NTOH(val) (((val) >> 8) | ((val) << 8))
+#define U32_NTOH(val) ((((val) & 0xff000000) >> 24) |                   \\
+                       (((val) & 0x00ff0000) >>  8) |                   \\
+                       (((val) & 0x0000ff00) <<  8) |                   \\
+                       (((val) & 0x000000ff) << 24))
+#define U64_NTOH(val) ((((val) & 0xff00000000000000LL) >> 56) |         \\
+                       (((val) & 0x00ff000000000000LL) >> 40) |         \\
+                       (((val) & 0x0000ff0000000000LL) >> 24) |         \\
+                       (((val) & 0x000000ff00000000LL) >>  8) |         \\
+                       (((val) & 0x00000000ff000000LL) <<  8) |         \\
+                       (((val) & 0x0000000000ff0000LL) << 24) |         \\
+                       (((val) & 0x000000000000ff00LL) << 40) |         \\
+                       (((val) & 0x00000000000000ffLL) << 56))
+#define IPV6_NTOH(dst, src) /* NOTE different syntax; currently no-op */
+#define U16_HTON(val) U16_NTOH(val)
+#define U32_HTON(val) U32_NTOH(val)
+#define U64_HTON(val) U64_NTOH(val)
+#define IPV6_HTON(dst, src) /* NOTE different syntax; currently no-op */
+#endif
+
+/****************************************************************
+ *
+ * The following are internal definitions used by the automatically
+ * generated code.  Users should not reference these definitions
+ * as they may change between versions of this code
+ *
+ ****************************************************************/
+
+#define OF_MESSAGE_IN_MATCH_POINTER(obj)                            \\
+    (WIRE_BUF_POINTER(&((obj)->wire_buffer), OF_MESSAGE_IN_MATCH_OFFSET))
+#define OF_MESSAGE_IN_MATCH_LEN(ptr) BUF_U16_GET(&ptr[2])
+#define OF_MESSAGE_IN_DATA_OFFSET(obj) \\
+    (FIXED_LEN + OF_MESSAGE_IN_MATCH_LEN(OF_MESSAGE_IN_MATCH_POINTER(obj)) + 2)
+
+#define OF_MESSAGE_OUT_DATA_OFFSET(obj) \\
+    (FIXED_LEN + of_message_out_actions_len_get(obj))
+
+""")
+
+def external_h_top_matter(out, name):
+    """
+    Generate top matter for external header file
+
+    @param name The name of the output file
+    @param out The output file object
+    """
+    common_top_matter(out, name)
+    out.write("""
+#include <loci/loci_base.h>
+#include <loci/of_message.h>
+#include <loci/of_match.h>
+#include <loci/of_object.h>
+#include <loci/of_wire_buf.h>
+
+/****************************************************************
+ *
+ * This file is divided into the following sections.
+ *
+ * A few object specific macros
+ * Class typedefs (no struct definitions)
+ * Per-data type accessor function typedefs
+ * Per-class new/delete function typedefs
+ * Per-class static delete functions
+ * Per-class, per-member accessor declarations
+ * Per-class structure definitions
+ * Generic union (inheritance) definitions
+ * Pointer set function declarations
+ * Some special case macros
+ *
+ ****************************************************************/
+""")
+
+def gen_top_static_functions(out):
+    out.write("""
+
+#define _MAX_PARENT_ITERATIONS 4
+/**
+ * Iteratively update parent lengths thru hierarchy
+ * @param obj The object whose length is being updated
+ * @param delta The difference between the current and new lengths
+ *
+ * Note that this includes updating the object itself.  It will
+ * iterate thru parents.
+ *
+ * Assumes delta > 0.
+ */
+static inline void
+of_object_parent_length_update(of_object_t *obj, int delta)
+{
+    int count = 0;
+    of_wire_buffer_t *wbuf;  /* For debug asserts only */
+
+    while (obj != NULL) {
+        ASSERT(count++ < _MAX_PARENT_ITERATIONS);
+        obj->length += delta;
+        if (obj->wire_length_set != NULL) {
+            obj->wire_length_set(obj, obj->length);
+        }
+        wbuf = obj->wire_object.wbuf;
+
+        /* Asserts for wire length checking */
+        ASSERT(obj->length + obj->wire_object.obj_offset <=
+               WBUF_CURRENT_BYTES(wbuf));
+        if (obj->parent == NULL) {
+            ASSERT(obj->length + obj->wire_object.obj_offset ==
+                   WBUF_CURRENT_BYTES(wbuf));
+        }
+
+        obj = obj->parent;
+    }
+}
+""")
+
+################################################################
+#
+################################################################
+
+def gen_version_enum(out):
+    """
+    Generate the enumerated type for versions in LoxiGen
+    @param out The file object to which to write the decs
+
+    This just uses the wire versions for now
+    """
+    out.write("""
+/**
+ * Enumeration of OpenFlow versions
+ *
+ * The wire protocol numbers are currently used for values of the corresponding
+ * version identifiers.
+ */
+typedef enum of_version_e {
+    OF_VERSION_UNKNOWN = 0,
+""")
+
+    is_first = True
+    max = 0
+    for v in of_g.wire_ver_map:
+        if is_first:
+            is_first = False
+        else:
+            out.write(",\n")
+        if v > max:
+            max = v
+        out.write("    %s = %d" % (of_g.wire_ver_map[v], v))
+
+    out.write("""
+} of_version_t;
+
+/**
+ * @brief Use this when declaring arrays indexed by wire version
+ */
+#define OF_VERSION_ARRAY_MAX %d
+""" % (max + 1))
+    
+def gen_object_enum(out):
+    """
+    Generate the enumerated type for object identification in LoxiGen
+    @param out The file object to which to write the decs
+    """
+    out.write("""
+
+/**
+ * Enumeration of OpenFlow objects
+ *
+ * We enumerate the OpenFlow objects used internally.  Note that some
+ * message types are determined both by an outer type (message type like
+ * stats_request) and an inner type (port stats).  These are different
+ * messages in ofC.
+ *
+ * These values are for internal use only.  They will change with
+ * different versions of ofC.
+ */
+
+typedef enum of_object_id_e {
+    /* Root object type */
+    OF_OBJECT_INVALID = -1, /* "invalid" return value for mappings */
+    OF_OBJECT = 0, /* Generic, untyped object */
+
+    /* OpenFlow message objects */
+""")
+    last = 0
+    msg_count = 0
+    for cls in of_g.ordered_messages:
+        out.write("    %s = %d,\n" % (enum_name(cls),
+                                   of_g.unified[cls]["object_id"]))
+        msg_count = of_g.unified[cls]["object_id"] + 1
+
+    out.write("\n    /* Non-message objects */\n")
+    for cls in of_g.ordered_non_messages:
+        out.write("    %s = %d,\n" % (enum_name(cls),
+                                   of_g.unified[cls]["object_id"]))
+        last = of_g.unified[cls]["object_id"]
+    out.write("\n    /* List objects */\n")
+    for cls in of_g.ordered_list_objects:
+        out.write("    %s = %d,\n" % (enum_name(cls),
+                                   of_g.unified[cls]["object_id"]))
+        last = of_g.unified[cls]["object_id"]
+
+    out.write("\n    /* Generic stats request/reply types; pseudo objects */\n")
+    for cls in of_g.ordered_pseudo_objects:
+        out.write("    %s = %d,\n" % (enum_name(cls),
+                                   of_g.unified[cls]["object_id"]))
+        last = of_g.unified[cls]["object_id"]
+
+    out.write("""
+    OF_OBJECT_COUNT = %d
+} of_object_id_t;
+
+extern const char *of_object_id_str[];
+
+#define OF_MESSAGE_OBJECT_COUNT %d
+""" % ((last + 1), msg_count))
+
+    # Generate object type range checking for inheritance classes
+
+    # @fixme These should be determined algorithmically
+    out.write("""
+/*
+ * Macros to check if an object ID is within an inheritance class range
+ */
+""")
+    # Alphabetical order for 'last'
+    last_ids = dict(of_action="OF_ACTION_STRIP_VLAN",
+                    of_oxm="OF_OXM_VLAN_VID_MASKED",
+                    of_instruction="OF_INSTRUCTION_WRITE_METADATA",
+                    of_queue_prop="OF_QUEUE_PROP_MIN_RATE",
+                    of_table_feature_prop="OF_TABLE_FEATURE_PROP_WRITE_SETFIELD_MISS",
+                    # @FIXME add meter_band ?
+                    )
+    for cls, last in last_ids.items():
+        out.write("""
+#define %(enum)s_FIRST_ID      (%(enum)s + 1)
+#define %(enum)s_LAST_ID       %(last)s
+#define %(enum)s_VALID_ID(id) \\
+    ((id) >= %(enum)s_FIRST_ID && \\
+     (id) <= %(enum)s_LAST_ID)
+""" % dict(enum=enum_name(cls), last=last))
+    out.write("""
+/**
+ * Function to check a wire ID
+ * @param object_id The ID to check
+ * @param base_object_id The inheritance parent, if applicable
+ * @returns boolean: If base_object_id is an inheritance class, check if
+ * object_id is valid as a subclass.  Otherwise return 1.
+ *
+ * Note: Could check that object_id == base_object_id in the
+ * second case.
+ */
+static inline int
+of_wire_id_valid(int object_id, int base_object_id) {
+    switch (base_object_id) {
+    case OF_ACTION:
+        return OF_ACTION_VALID_ID(object_id);
+    case OF_OXM:
+        return OF_OXM_VALID_ID(object_id);
+    case OF_QUEUE_PROP:
+        return OF_QUEUE_PROP_VALID_ID(object_id);
+    case OF_TABLE_FEATURE_PROP:
+        return OF_TABLE_FEATURE_PROP_VALID_ID(object_id);
+    case OF_INSTRUCTION:
+        return OF_INSTRUCTION_VALID_ID(object_id);
+    default:
+        break;
+    }
+    return 1;
+}
+""")
+
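+# Illustrative sketch only: assuming enum_name("of_action") is OF_ACTION, the
+# of_action entry in last_ids above expands to roughly:
+#
+#     #define OF_ACTION_FIRST_ID      (OF_ACTION + 1)
+#     #define OF_ACTION_LAST_ID       OF_ACTION_STRIP_VLAN
+#     #define OF_ACTION_VALID_ID(id) \
+#         ((id) >= OF_ACTION_FIRST_ID && \
+#          (id) <= OF_ACTION_LAST_ID)
+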
+def gen_object_enum_str(out):
+    out.write("\nconst char *of_object_id_str[] = {\n")
+    out.write("    \"of_object\",\n")
+    for cls in of_g.ordered_messages:
+        out.write("    \"%s\",\n" % cls)
+    out.write("\n    /* Non-message objects */\n")
+    for cls in of_g.ordered_non_messages:
+        out.write("    \"%s\",\n" % cls)
+    out.write("\n    /* List objects */\n")
+    for cls in of_g.ordered_list_objects:
+        out.write("    \"%s\",\n" % cls)
+    out.write("\n    /* Generic stats request/reply types; pseudo objects */\n")
+    for cls in of_g.ordered_pseudo_objects:
+        out.write("    \"%s\",\n" % cls)
+    out.write("\n    \"of_unknown_object\"\n};\n")
+
+    # We'll do version strings while we're at it
+    out.write("""
+const char *of_version_str[] = {
+    "Unknown OpenFlow Version",
+    "OpenFlow-1.0",
+    "OpenFlow-1.1",
+    "OpenFlow-1.2"
+};
+
+const of_mac_addr_t of_mac_addr_all_ones = {
+    {
+        0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+    }
+};
+/* Just to be explicit; static duration vars are init'd to 0 */
+const of_mac_addr_t of_mac_addr_all_zeros = {
+    {
+        0, 0, 0, 0, 0, 0
+    }
+};
+
+const of_ipv6_t of_ipv6_all_ones = {
+    {
+        0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+        0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+    }
+};
+/* Just to be explicit; static duration vars are init'd to 0 */
+const of_ipv6_t of_ipv6_all_zeros = {
+    {
+        0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0
+    }
+};
+
+/** @var of_error_strings
+ * The error string map; use abs value to index
+ */
+const char *of_error_strings[] = { OF_ERROR_STRINGS };
+""")
+
+################################################################
+#
+# Internal Utility Functions
+#
+################################################################
+
+
+def acc_name(cls, m_name):
+    """
+    Generate the root name of an accessor function for typedef
+    @param cls The class name
+    @param m_name The member name
+    """
+    (m_type, get_rv) = get_acc_rv(cls, m_name)
+    return "%s_%s" % (cls, m_type)
+
+def get_acc_rv(cls, m_name):
+    """
+    Determine the data type and return type for a get accessor.
+
+    The return type may be "void" or it may be the accessor type
+    depending on the system configuration and on the data type.
+
+    @param cls The class name
+    @param m_name The member name
+    @return A pair (m_type, rv) where m_type is the unified type of the
+    member and rv is the get_accessor return type
+    """
+    member = of_g.unified[cls]["union"][m_name]
+    m_type = member["m_type"]
+    rv = "int"
+    if member_returns_val(cls, m_name):
+        rv = m_type
+    if m_type[-2:] == "_t":
+        m_type = m_type[:-2]
+
+    return (m_type, rv)
+
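+# Illustrative example of get_acc_rv (hypothetical member names; the actual
+# values come from the unified class database and member_returns_val):
+#
+#     >>> get_acc_rv("of_flow_add", "xid")    # scalar that returns its value
+#     ('uint32', 'uint32_t')
+#     >>> get_acc_rv("of_flow_add", "match")  # composite; accessor returns int
+#     ('of_match', 'int')
+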
+def param_list(cls, m_name, a_type):
+    """
+    Generate the parameter list (no parens) for an a_type accessor
+    @param cls The class name
+    @param m_name The member name
+    @param a_type One of "set" or "get" or TBD
+    """
+    member = of_g.unified[cls]["union"][m_name]
+    m_type = member["m_type"]
+    params = ["%s_t *obj" % cls]
+    if a_type == "set":
+        if loxi_utils.type_is_scalar(m_type):
+            params.append("%s %s" % (m_type, m_name))
+        else:
+            params.append("%s *%s" % (m_type, m_name))
+    elif a_type in ["get", "bind"]:
+        params.append("%s *%s" % (m_type, m_name))
+    else:
+        debug("Class %s, member %s: bad param list a_type: %s" %
+            (cls, m_name, a_type))
+        sys.exit(1)
+    return params
+
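+# Illustrative example of param_list (hypothetical member; the real member
+# type comes from the unified class database):
+#
+#     >>> param_list("of_flow_add", "xid", "set")
+#     ['of_flow_add_t *obj', 'uint32_t xid']
+#     >>> param_list("of_flow_add", "xid", "get")
+#     ['of_flow_add_t *obj', 'uint32_t *xid']
+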
+def typed_function_base(cls, m_name):
+    """
+    Generate the core name for accessors based on the type
+    @param cls The class name
+    @param m_name The member name
+    """
+    (m_type, get_rv) = get_acc_rv(cls, m_name)
+    return "%s_%s" % (cls, m_type)
+
+def member_function_base(cls, m_name):
+    """
+    Generate the core name for accessors based on the member name
+    @param cls The class name
+    @param m_name The member name
+    """
+    return "%s_%s" % (cls, m_name)
+
+def field_ver_get(cls, m_name):
+    """
+    Generate a dict, indexed by wire version, giving a pair (type, offset)
+
+    @param cls The class name
+    @param m_name The name of the class member
+
+    Versions in which m_name does not appear are omitted from the result.
+    A dict is used for more convenient indexing.
+    """
+    result = {}
+
+    for ver in of_g.unified[cls]:
+        if type(ver) == type(0): # It's a version
+            if "use_version" in of_g.unified[cls][ver]: # deref version ref
+                ref_ver = of_g.unified[cls][ver]["use_version"]
+                members = of_g.unified[cls][ref_ver]["members"]
+            else:
+                members = of_g.unified[cls][ver]["members"]
+            idx = loxi_utils.member_to_index(m_name, members)
+            if (idx < 0):
+                continue # Member not in this version
+            m_type = members[idx]["m_type"]
+            offset = members[idx]["offset"]
+
+            # If m_type is mixed, get wire version type from global data
+            if m_type in of_g.of_mixed_types and \
+                    ver in of_g.of_mixed_types[m_type]:
+                m_type = of_g.of_mixed_types[m_type][ver]
+
+            # add version to result list
+            result[ver] = (m_type, offset)
+
+    return result
+
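+# Illustrative example of the dict field_ver_get returns (hypothetical class,
+# member, types and offsets; the real values come from the per-version member
+# tables):
+#
+#     field_ver_get("of_flow_add", "match") ->
+#         {1: ("of_match_v1_t", 8), 2: ("of_match_v2_t", 48),
+#          3: ("of_match_v3_t", 48)}
+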
+def v3_match_offset_get(cls):
+    """
+    Return the offset of an OF 1.2 match in an object if it has one;
+    otherwise return -1
+    """
+    result = field_ver_get(cls, "match")
+    if of_g.VERSION_1_2 in result:
+        if result[of_g.VERSION_1_2][0] == "of_match_v3_t":
+            return result[of_g.VERSION_1_2][1]
+    return -1
+
+################################################################
+#
+# OpenFlow Object Definitions
+#
+################################################################
+
+
+def gen_of_object_defs(out):
+    """
+    Generate low level of_object core operations
+    @param out The file for output, already open
+    """
+
+def gen_generics(out):
+    for (cls, subclasses) in type_maps.inheritance_map.items():
+        out.write("""
+/**
+ * Inheritance super class for %(cls)s
+ *
+ * This class is the union of %(cls)s classes.  You can refer
+ * to it untyped by refering to the member 'header' whose structure
+ * is common across all sub-classes.
+ */
+
+union %(cls)s_u {
+    %(cls)s_header_t header; /* Generic instance */
+""" % dict(cls=cls))
+        for subcls in sorted(subclasses):
+            out.write("    %s_%s_t %s;\n" % (cls, subcls, subcls))
+        out.write("};\n")
+
+def gen_struct_typedefs(out):
+    """
+    Generate typedefs for all struct objects
+    @param out The file for output, already open
+    """
+
+    out.write("\n/* LOCI inheritance parent typedefs */\n")
+    for cls in type_maps.inheritance_map:
+        out.write("typedef union %(cls)s_u %(cls)s_t;\n" % dict(cls=cls))
+    out.write("\n/* LOCI object typedefs */\n")
+    for cls in of_g.standard_class_order:
+        if cls in type_maps.inheritance_map:
+            continue
+        if config_check("gen_fn_ptrs"):
+            out.write("typedef struct %(cls)s_s %(cls)s_t;\n" % dict(cls=cls))
+        else:
+            template = "typedef of_object_t %(cls)s_t;\n"
+            out.write(template % dict(cls=cls))
+
+    out.write("""
+/****************************************************************
+ *
+ * Additional of_object defines
+ * These are needed for some static inline ops, so we put them here.
+ *
+ ****************************************************************/
+
+/* Delete an OpenFlow object without reference to its type */
+extern void of_object_delete(of_object_t *obj);
+
+""")
+
+def gen_generic_union(out):
+    """
+    Generate the generic union object composing all LOCI objects
+
+    @param out The file to which to write the decs
+    """
+    out.write("""
+/**
+ * The common LOCI object is a union of all possible objects.
+ */
+union of_generic_u {
+    of_object_t object;  /* Common base class with fundamental accessors */
+
+    /* Message objects */
+""")
+    for cls in of_g.ordered_messages:
+        out.write("    %s_t %s;\n" % (cls, cls))
+    out.write("\n    /* Non-message composite objects */\n")
+    for cls in of_g.ordered_non_messages:
+        if cls in type_maps.inheritance_map:
+            continue
+        out.write("    %s_t %s;\n" % (cls, cls))
+    out.write("\n    /* List objects */\n")
+    for cls in of_g.ordered_list_objects:
+        out.write("    %s_t %s;\n" % (cls, cls))
+    out.write("};\n")
+
+def gen_common_struct_definitions(out):
+    out.write("""
+/****************************************************************
+ *
+ * Unified structure definitions
+ *
+ ****************************************************************/
+
+struct of_object_s {
+    /* Common members */
+%(common)s
+};
+""" % dict(common=of_g.base_object_members))
+
+def gen_flow_add_setup_function_declarations(out):
+    """
+    Add the declarations for functions that can be initialized
+    by a flow add.  These are defined external to LOXI.
+    """
+
+    out.write("""
+/****************************************************************
+ * Functions for objects that can be initialized by a flow add message.
+ * These are defined in a non-autogenerated file
+ ****************************************************************/
+
+/**
+ * @brief Set up a flow removed message from the original add
+ * @param obj The flow removed message being updated
+ * @param flow_add The flow_add message to use
+ *
+ * Initialize the following fields of obj to be identical
+ * to what was originally on the wire from the flow_add object:
+ *     match
+ *     cookie
+ *     priority
+ *     idle_timeout
+ *     hard_timeout
+ *
+ */
+
+extern int
+of_flow_removed_setup_from_flow_add(of_flow_removed_t *obj,
+                                    of_flow_add_t *flow_add);
+
+
+/**
+ * @brief Set up the packet in match structure from the original add
+ * @param obj The packet in message being updated
+ * @param flow_add The flow_add message to use
+ * @returns Indigo error code.  Does not return a version error if
+ * the version does not require initializing obj.
+ *
+ * Initialize the match member of obj to be identical to what was originally
+ * on the wire from the flow_add object.  If applicable, the table ID is also
+ * initialized from the flow_add object.
+ *
+ * This API applies to 1.2 and later only.
+ */
+
+extern int
+of_packet_in_setup_from_flow_add(of_packet_in_t *obj,
+                                 of_flow_add_t *flow_add);
+
+
+/**
+ * @brief Set up the flow stats entry from the original add
+ * @param obj The flow stats entry being updated
+ * @param flow_add The flow_add message to use
+ * @param effects Optional actions or instructions; see below.
+ *
+ * Initialize the following fields of obj to be identical
+ * to what was originally on the wire from the flow_add object:
+ *     match
+ *     actions/instructions (effects)
+ *     cookie
+ *     priority
+ *     idle_timeout
+ *     hard_timeout
+ *
+ * Note that the actions/instructions of a flow may be modified by a 
+ * subsequent flow modify message.  To facilitate implementations,
+ * the "effects" parameter is provided.  If effects is NULL, the
+ * actions/instructions are taken from the flow_add message.
+ * Otherwise, effects is coerced to the proper type (actions or
+ * instructions) and used to init obj.
+ */
+
+extern int
+of_flow_stats_entry_setup_from_flow_add(of_flow_stats_entry_t *obj,
+                                        of_flow_add_t *flow_add,
+                                        of_object_t *effects);
+""")
+
+def gen_struct_definitions(out):
+    """
+    Generate the declaration of all of_ C structures
+
+    @param out The file to which to write the decs
+    """
+
+    # This should only get called if gen_fn_ptr is true in code_gen_config
+    if not config_check("gen_fn_ptrs"):
+        debug("Error: gen_struct_definitions called, but gen_fn_ptrs not set")
+        return
+
+    for cls in of_g.standard_class_order:
+        if cls in type_maps.inheritance_map:
+            continue # These are generated elsewhere
+        note = ""
+        if loxi_utils.class_is_message(cls):
+            note = " /* Class is message */"
+        out.write("struct %s_s {%s\n" % (cls, note))
+        out.write("""    /* Common members */
+%s
+    /* Class specific members */
+""" % of_g.base_object_members)
+        if loxi_utils.class_is_list(cls):
+            out.write("""
+    %(cls)s_first_f first;
+    %(cls)s_next_f next;
+    %(cls)s_append_bind_f append_bind;
+    %(cls)s_append_f append;
+};
+
+""" % {"cls": cls})
+            continue   # All done with list object
+
+        # Else, not a list instance; add accessors for all data members
+        for m_name in of_g.ordered_members[cls]:
+            if m_name in of_g.skip_members:
+                # These members (length, etc) are handled internally
+                continue
+            f_name = acc_name(cls, m_name)
+            out.write("    %s_get_f %s;\n" % (f_name, m_name + "_get"))
+            out.write("    %s_set_f %s;\n" % (f_name, m_name + "_set"))
+        out.write("};\n\n")
+
+
+################################################################
+#
+# List accessor code generation
+#
+# Currently these all implement copy on read semantics
+#
+################################################################
+
+def init_call(e_type, obj, ver, length, cw):
+    """
+    Generate the init call given the strings for params
+    """
+    hdr = "" # If inheritance type, coerce to hdr object
+    obj_name = obj
+    if e_type in type_maps.inheritance_map:
+        hdr = "_header"
+        obj_name = "(%s_header_t *)" % e_type + obj
+
+    return """\
+%(e_type)s%(hdr)s_init(%(obj_name)s,
+            %(ver)s, %(length)s, %(cw)s)\
+""" % dict(e_type=e_type, hdr=hdr, obj_name=obj_name, ver=ver,
+           length=length, cw=cw)
+
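+# Illustrative example of the string init_call renders (assuming of_action is
+# an inheritance class, so the object is coerced to its header subclass):
+#
+#     init_call("of_action", "obj", "list->version", "0", "1") ->
+#
+#         of_action_header_init((of_action_header_t *)obj,
+#                     list->version, 0, 1)
+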
+def gen_list_first(out, cls, e_type):
+    """
+    Generate the body of a list_first operation
+    @param cls The class name for which code is being generated
+    @param e_type The element type of the list
+    @param out The file to which to write
+    """
+    i_call = init_call(e_type, "obj", "list->version", "0", "1")
+    if e_type in type_maps.inheritance_map:
+        len_str = "obj->header.length"
+    else:
+        len_str = "obj->length"
+
+    out.write("""
+/**
+ * Associate an iterator with a list
+ * @param list The list to iterate over
+ * @param obj The list entry iteration pointer
+ * @return OF_ERROR_RANGE if the list is empty (end of list)
+ *
+ * The obj instance is completely initialized.  The caller is responsible
+ * for cleaning up any wire buffers associated with obj before this call
+ */
+
+int
+%(cls)s_first(%(cls)s_t *list,
+    %(e_type)s_t *obj)
+{
+    int rv;
+
+    %(i_call)s;
+    if ((rv = of_list_first((of_object_t *)list, (of_object_t *)obj)) < 0) {
+        return rv;
+    }
+""" % dict(cls=cls, e_type=e_type, i_call=i_call))
+
+    # Special case flow_stats_entry lists
+
+    out.write("""
+    rv = of_object_wire_init((of_object_t *) obj, %(u_type)s,
+                             list->length);
+    if ((rv == OF_ERROR_NONE) && (%(len_str)s == 0)) {
+        return OF_ERROR_PARSE;
+    }
+
+    return rv;
+}
+""" % dict(cls=cls, e_type=e_type, u_type=enum_name(e_type), len_str=len_str))
+
+
+def gen_bind(out, cls, m_name, m_type):
+    """
+    Generate the body of a bind function
+    @param out The file to which to write
+    @param cls The class name for which code is being generated
+    @param m_name The name of the data member
+    @param m_type The type of the data member
+    """
+
+    bparams = ",\n    ".join(param_list(cls, m_name, "bind"))
+
+    # The child class name is the member type without its _t suffix
+    e_type = m_type[:-2]
+    i_call = init_call(e_type, "child", "parent->version", "0", "1")
+    if e_type in type_maps.inheritance_map:
+        len_str = "child->header.length"
+    else:
+        len_str = "child->length"
+
+    # @fixme The generated body below is incomplete (the child attach call
+    # is truncated) and still refers to 'list' and 'obj'
+
+    out.write("""
+/**
+ * Bind the child object to the parent object for read processing
+ * @param parent The parent object
+ * @param child The child object
+ *
+ * The child obj instance is completely initialized.
+ */
+
+int
+%(cls)s_%(m_name)s_bind(%(cls)s_t *parent,
+    %(e_type)s_t *child)
+{
+    int rv;
+
+    %(i_call)s;
+
+    /* Derive offset and length of child in parent */
+    OF_TRY(of_object_child_attach(parent, child, 
+    if ((rv = of_list_first((of_object_t *)list, (of_object_t *)obj)) < 0) {
+        return rv;
+    }
+""" % dict(cls=cls, e_type=e_type, i_call=i_call))
+
+    # Special case flow_stats_entry lists
+
+    out.write("""
+    rv = of_object_wire_init((of_object_t *) obj, %(u_type)s,
+                               list->length);
+    if ((rv == OF_ERROR_NONE) && (%(len_str)s == 0)) {
+        return OF_ERROR_PARSE;
+    }
+
+    return rv;
+}
+""" % dict(cls=cls, e_type=e_type, u_type=enum_name(e_type), len_str=len_str))
+
+
+def gen_list_next(out, cls, e_type):
+    """
+    Generate the body of a list_next operation
+    @param cls The class name for which code is being generated
+    @param e_type The element type of the list
+    @param out The file to which to write
+    """
+
+    if e_type in type_maps.inheritance_map:
+        len_str = "obj->header.length"
+    else:
+        len_str = "obj->length"
+        
+    out.write("""
+/**
+ * Advance an iterator to the next element in a list
+ * @param list The list being iterated
+ * @param obj The list entry iteration pointer
+ * @return OF_ERROR_RANGE if already at the last entry on the list
+ *
+ */
+
+int
+%(cls)s_next(%(cls)s_t *list,
+    %(e_type)s_t *obj)
+{
+    int rv;
+
+    if ((rv = of_list_next((of_object_t *)list, (of_object_t *)obj)) < 0) {
+        return rv;
+    }
+
+    rv = of_object_wire_init((of_object_t *) obj, %(u_type)s,
+        list->length);
+
+    if ((rv == OF_ERROR_NONE) && (%(len_str)s == 0)) {
+        return OF_ERROR_PARSE;
+    }
+
+    return rv;
+}
+""" % dict(cls=cls, e_type=e_type, u_type=enum_name(e_type), len_str=len_str))
+
+def gen_list_append(out, cls, e_type):
+    """
+    Generate the bodies of the list append functions
+    @param cls The class name for which code is being generated
+    @param e_type The element type of the list
+    @param out The file to which to write
+    """
+
+    out.write("""
+/**
+ * Set up to append an object of type %(e_type)s to an %(cls)s.
+ * @param list The list that is prepared for append
+ * @param obj Pointer to object to hold data to append
+ *
+ * The obj instance is completely initialized.  The caller is responsible
+ * for cleaning up any wire buffers associated with obj before this call.
+ *
+ * See the generic documentation for of_list_append_bind.
+ */
+
+int
+%(cls)s_append_bind(%(cls)s_t *list,
+    %(e_type)s_t *obj)
+{
+    return of_list_append_bind((of_object_t *)list, (of_object_t *)obj);
+}
+
+/**
+ * Append an item to a %(cls)s list.
+ *
+ * This copies data from item and leaves item untouched.
+ *
+ * See the generic documentation for of_list_append.
+ */
+
+int
+%(cls)s_append(%(cls)s_t *list,
+    %(e_type)s_t *item)
+{
+    return of_list_append((of_object_t *)list, (of_object_t *)item);
+}
+
+""" % dict(cls=cls, e_type=e_type))
+
+def gen_list_accessors(out, cls):
+    e_type = loxi_utils.list_to_entry_type(cls)
+    gen_list_first(out, cls, e_type)
+    gen_list_next(out, cls, e_type)
+    gen_list_append(out, cls, e_type)
+
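+# Illustrative C usage of the generated list accessors (a sketch, assuming an
+# of_list_action class with entry type of_action; the iteration macro is
+# emitted by gen_accessor_declarations below):
+#
+#     of_action_t elt;
+#     int rv;
+#
+#     OF_LIST_ACTION_ITER(list, &elt, rv) {
+#         /* process elt, bound to the current wire entry */
+#     }
+#
+#     /* append a previously prepared of_action_t *item */
+#     rv = of_list_action_append(list, item);
+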
+################################################################
+#
+# Accessor Functions
+#
+################################################################
+
+    
+def gen_accessor_declarations(out):
+    """
+    Generate the declaration of each version independent accessor
+
+    @param out The file to which to write the decs
+    """
+
+    out.write("""
+/****************************************************************
+ *
+ * Unified, per-member accessor function declarations
+ *
+ ****************************************************************/
+""")
+    for cls in of_g.standard_class_order:
+        if cls in type_maps.inheritance_map:
+            continue
+        out.write("\n/* Unified accessor functions for %s */\n" % cls)
+        for m_name in of_g.ordered_members[cls]:
+            if m_name in of_g.skip_members:
+                continue
+            m_type = loxi_utils.member_base_type(cls, m_name)
+            base_name = "%s_%s" % (cls, m_name)
+            gparams = ",\n    ".join(param_list(cls, m_name, "get"))
+            get_ret_type = accessor_return_type("get", m_type)
+            sparams = ",\n    ".join(param_list(cls, m_name, "set"))
+            set_ret_type = accessor_return_type("set", m_type)
+            bparams = ",\n    ".join(param_list(cls, m_name, "bind"))
+            bind_ret_type = accessor_return_type("bind", m_type)
+
+            if loxi_utils.type_is_of_object(m_type):
+                # Generate bind accessors, but not get accessor
+                out.write("""
+extern %(set_ret_type)s %(base_name)s_set(
+    %(sparams)s);
+extern %(bind_ret_type)s %(base_name)s_bind(
+    %(bparams)s);
+extern %(m_type)s *%(cls)s_%(m_name)s_get(
+    %(cls)s_t *obj);
+""" % dict(base_name=base_name, sparams=sparams, bparams=bparams,
+           m_name=m_name, m_type=m_type, cls=cls,
+           set_ret_type=set_ret_type, bind_ret_type=bind_ret_type))
+            else:
+                out.write("""
+extern %(set_ret_type)s %(base_name)s_set(
+    %(sparams)s);
+extern %(get_ret_type)s %(base_name)s_get(
+    %(gparams)s);
+""" % dict(base_name=base_name, gparams=gparams, sparams=sparams,
+           get_ret_type=get_ret_type, set_ret_type=set_ret_type))
+            
+        if loxi_utils.class_is_list(cls):
+            e_type = loxi_utils.list_to_entry_type(cls)
+            out.write("""
+extern int %(cls)s_first(
+    %(cls)s_t *list, %(e_type)s_t *obj);
+extern int %(cls)s_next(
+    %(cls)s_t *list, %(e_type)s_t *obj);
+extern int %(cls)s_append_bind(
+    %(cls)s_t *list, %(e_type)s_t *obj);
+extern int %(cls)s_append(
+    %(cls)s_t *list, %(e_type)s_t *obj);
+
+/**
+ * Iteration macro for list of type %(cls)s
+ * @param list Pointer to the list of type %(cls)s being
+ * iterated over
+ * @param elt Pointer to an element of type %(e_type)s
+ * @param rv On exiting the loop will have the value OF_ERROR_RANGE.
+ */
+#define %(u_cls)s_ITER(list, elt, rv)  \\
+    for ((rv) = %(cls)s_first((list), (elt));   \\
+         (rv) == OF_ERROR_NONE;   \\
+         (rv) = %(cls)s_next((list), (elt)))
+""" % dict(u_cls=cls.upper(), cls=cls, e_type=e_type))
+
+
+def wire_accessor(m_type, a_type):
+    """
+    Returns the name of the a_type accessor for low level wire buff offset
+    @param m_type The member type
+    @param a_type The accessor type (set or get)
+    """
+    # Strip off _t if present
+    if m_type in of_g.of_base_types:
+        m_type = of_g.of_base_types[m_type]["short_name"]
+    if m_type in of_g.of_mixed_types:
+        m_type = of_g.of_mixed_types[m_type]["short_name"]
+    if m_type[-2:] == "_t":
+        m_type = m_type[:-2]
+    if m_type == "octets":
+        m_type = "octets_data"
+    return "of_wire_buffer_%s_%s" % (m_type, a_type)
+
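+# Illustrative example (the exact short names come from of_g.of_base_types /
+# of_mixed_types): a uint32_t member whose short name is "uint32" yields
+# "of_wire_buffer_uint32_get" for get and "of_wire_buffer_uint32_set" for set;
+# of_octets_t members map to the "octets_data" wire accessors.
+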
+def get_len_macro(cls, m_type, version):
+    """
+    Get the length macro for m_type in cls
+    """
+    if m_type.find("of_match") == 0:
+        return "_WIRE_MATCH_PADDED_LEN(obj, offset)"
+    if m_type.find("of_list_oxm") == 0:
+        return "wire_match_len(obj, 0) - 4"
+    if loxi_utils.class_is_tlv16(m_type):
+        return "_TLV16_LEN(obj, offset)"
+    if cls == "of_packet_out" and m_type == "of_list_action_t":
+        return "_PACKET_OUT_ACTION_LEN(obj)"
+    # Default is everything to the end of the object
+    return "_END_LEN(obj, offset)"
+
+def gen_accessor_offsets(out, cls, m_name, version, a_type, m_type, offset):
+    """
+    Generate the sub-object offset and length calculations for accessors
+    @param out File being written
+    @param m_name Name of member
+    @param version Wire version being processed
+    @param a_type The accessor type (set or get)
+    @param m_type The original member type
+    @param offset The offset of the object or -1 if not fixed
+    """
+    # determine offset
+    o_str = "%d" % offset  # Default is fixed length
+    if offset == -1:
+        # There are currently 4 special cases for this
+        # In general, get offset and length of predecessor
+        if (loxi_utils.cls_is_flow_mod(cls) and m_name == "instructions"):
+            pass
+        elif (cls == "of_flow_stats_entry" and m_name == "instructions"):
+            pass
+        elif (cls == "of_packet_in" and m_name == "data"):
+            pass
+        elif (cls == "of_packet_out" and m_name == "data"):
+            pass
+        else:
+            debug("Error: Unknown member with offset == -1")
+            debug("  cls %s, m_name %s, version %d" % (cls, m_name, version))
+            sys.exit(1)
+        o_str = "_%s_%s_OFFSET(obj)" % (cls.upper()[3:], m_name.upper())
+
+    out.write("""\
+        offset = %s;
+""" % o_str);
+
+    # This could be moved to main body but for version check
+    if not loxi_utils.type_is_scalar(m_type):
+        if loxi_utils.class_is_var_len(m_type[:-2], version) or \
+                m_type == "of_match_t":
+            len_macro = get_len_macro(cls, m_type, version)
+        else:
+            len_macro = "%d" % of_g.base_length[(m_type[:-2], version)]
+        out.write("        cur_len = %s;\n" % len_macro)
+    out.write("        break;\n")
+
+def length_of(m_type, version):
+    """
+    Return the length of a type based on the version
+    """
+    if m_type in of_g.of_mixed_types:
+        m_type = of_g.of_mixed_types[m_type][version]
+    if m_type in of_g.of_base_types:
+        return of_g.of_base_types[m_type]["bytes"]
+    if (m_type[:-2], version) in of_g.base_length:
+        return of_g.base_length[(m_type[:-2], version)]
+    print "Unknown length request", m_type, version
+    sys.exit(1)
+        
+
+def gen_get_accessor_body(out, cls, m_type, m_name):
+    """
+    Generate the common operations for a get accessor
+    """
+    if loxi_utils.type_is_scalar(m_type):
+        ver = ""      # See if version required for scalar update
+        if m_type in of_g.of_mixed_types:
+            ver = "ver, "
+        out.write("""\
+    %(wa)s(%(ver)swbuf, abs_offset, %(m_name)s);
+""" % dict(wa=wire_accessor(m_type, "get"), ver=ver, m_name=m_name))
+
+        if m_type == "of_port_no_t":
+           out.write("    OF_PORT_NO_VALUE_CHECK(*%s, ver);\n" % m_name)
+    elif m_type == "of_octets_t":
+        out.write("""\
+    ASSERT(cur_len + abs_offset <= WBUF_CURRENT_BYTES(wbuf));
+    %(m_name)s->bytes = cur_len;
+    %(m_name)s->data = OF_WIRE_BUFFER_INDEX(wbuf, abs_offset);
+""" % dict(m_name=m_name))
+    elif m_type == "of_match_t":
+        out.write("""
+    ASSERT(cur_len + abs_offset <= WBUF_CURRENT_BYTES(wbuf));
+    match_octets.bytes = cur_len;
+    match_octets.data = OF_OBJECT_BUFFER_INDEX(obj, offset);
+    OF_TRY(of_match_deserialize(ver, %(m_name)s, &match_octets));
+""" % dict(m_name=m_name))
+    else:
+        out.write("""
+    /* Initialize child */
+    %(m_type)s_init(%(m_name)s, obj->version, 0, 1);
+    /* Attach to parent */
+    %(m_name)s->parent = (of_object_t *)obj;
+    %(m_name)s->wire_object.wbuf = obj->wire_object.wbuf;
+    %(m_name)s->wire_object.obj_offset = abs_offset;
+    %(m_name)s->wire_object.owned = 0;
+    %(m_name)s->length = cur_len;
+""" % dict(m_type=m_type[:-2], m_name=m_name))
+
+
+def gen_set_accessor_body(out, cls, m_type, m_name):
+    """
+    Generate the contents of a set accessor
+    """
+    if loxi_utils.type_is_scalar(m_type) or m_type == "of_octets_t":
+        ver = "" # See if version required for scalar update
+        if m_type in of_g.of_mixed_types:
+            ver = "ver, "
+        cur_len = "" # Octets members also pass the current length
+        if m_type == "of_octets_t":
+            cur_len = ", cur_len"
+            out.write("""\
+    new_len = %(m_name)s->bytes;
+    of_wire_buffer_grow(wbuf, abs_offset + (new_len - cur_len));
+""" % dict(m_name=m_name))
+        out.write("""\
+    %(wa)s(%(ver)swbuf, abs_offset, %(m_name)s%(cur_len)s);
+""" % dict(wa=wire_accessor(m_type, "set"), ver=ver, cur_len=cur_len,
+           m_name=m_name))
+
+    elif m_type == "of_match_t":
+        out.write("""
+    /* Match object */
+    OF_TRY(of_match_serialize(ver, %(m_name)s, &match_octets));
+    new_len = match_octets.bytes;
+    of_wire_buffer_replace_data(wbuf, abs_offset, cur_len,
+        match_octets.data, new_len);
+    /* Free match serialized octets */
+    FREE(match_octets.data);
+""" % dict(m_name=m_name))
+
+    else:  # Other object type
+        out.write("\n    /* LOCI object type */")
+        # Need to special case OXM list
+        out.write("""
+    new_len = %(m_name)s->length;
+    /* If underlying buffer already shared; nothing to do */
+    if (obj->wire_object.wbuf == %(m_name)s->wire_object.wbuf) {
+        of_wire_buffer_grow(wbuf, abs_offset + new_len);
+        /* Verify that the offsets are correct */
+        ASSERT(abs_offset == OF_OBJECT_ABSOLUTE_OFFSET(%(m_name)s, 0));
+        /* ASSERT(new_len == cur_len); */ /* fixme: may fail for OXM lists */
+        return %(ret_success)s;
+    }
+
+    /* Otherwise, replace existing object in data buffer */
+    of_wire_buffer_replace_data(wbuf, abs_offset, cur_len,
+        OF_OBJECT_BUFFER_INDEX(%(m_name)s, 0), new_len);
+""" % dict(m_name=m_name, ret_success=accessor_return_success("set", m_type)))
+
+    if not loxi_utils.type_is_scalar(m_type):
+        if cls == "of_packet_out" and m_type == "of_list_action_t":
+            out.write("""
+    /* Special case for setting action lengths */
+    _PACKET_OUT_ACTION_LEN_SET(obj, %(m_name)s->length);
+""" % dict(m_name=m_name))
+        elif m_type not in ["of_match_t", "of_octets_t"]:
+            out.write("""
+    /* @fixme Shouldn't this precede copying value's data to buffer? */
+    if (%(m_name)s->wire_length_set != NULL) {
+        %(m_name)s->wire_length_set((of_object_t *)%(m_name)s, %(m_name)s->length);
+    }
+""" % dict(m_name=m_name))
+        out.write("""
+    /* Not scalar, update lengths if needed */
+    delta = new_len - cur_len;
+    if (delta != 0) {
+        /* Update parent(s) */
+        of_object_parent_length_update((of_object_t *)obj, delta);
+    }
+""")
+
+def obj_assert_check(cls):
+    """
+    The body of the assert check for an accessor
+    We allow all versions of add/delete/modify to use the same accessors
+    """
+    if cls in ["of_flow_modify", "of_flow_modify_strict",
+               "of_flow_delete", "of_flow_delete_strict",
+               "of_flow_add"]:
+        return "IS_FLOW_MOD_SUBTYPE(obj->object_id)"
+    else:
+        return "obj->object_id == %s" % cls.upper()
+
+def gen_of_object_get(out, cls, m_name, m_type):
+    sub_cls = m_type[:-2]
+    out.write("""
+/**
+ * Create a copy of %(m_name)s into a new variable of type %(m_type)s from 
+ * a %(cls)s instance.
+ *
+ * @param obj Pointer to the source of type %(cls)s_t
+ * @returns A pointer to a new instance of type %(m_type)s whose contents
+ * match that of %(m_name)s from source
+ * @returns NULL if an error occurs
+ */
+%(m_type)s *
+%(cls)s_%(m_name)s_get(%(cls)s_t *obj) {
+    %(m_type)s _%(m_name)s;
+    %(m_type)s *_%(m_name)s_ptr;
+
+    %(cls)s_%(m_name)s_bind(obj, &_%(m_name)s);
+    _%(m_name)s_ptr = (%(m_type)s *)of_object_dup(&_%(m_name)s);
+    return _%(m_name)s_ptr;
+}
+""" % dict(m_name=m_name, m_type=m_type, cls=cls, sub_cls=sub_cls))
+
+def gen_unified_acc_body(out, cls, m_name, ver_type_map, a_type, m_type):
+    """
+    Generate the body of a set or get accessor function
+
+    @param out The file to which to write the decs
+    @param cls The class name
+    @param m_name The member name
+    @param ver_type_map Maps (type, offset) pairs to a list of versions
+    @param a_type The accessor type, set or get
+    @param m_type The original member type
+
+    The type values in ver_type_map are now ignored as we've pushed down
+    the type munging to the lower level.
+
+    This is unified because the version switch case processing is the
+    same for both set and get
+    """
+    out.write("""{
+    of_wire_buffer_t *wbuf;
+    int offset = 0; /* Offset of value relative to the start obj */
+    int abs_offset; /* Offset of value relative to start of wbuf */
+    of_version_t ver;
+""")
+
+    if not loxi_utils.type_is_scalar(m_type):
+        out.write("""\
+    int cur_len = 0; /* Current length of object data */
+""")
+        if a_type == "set":
+            out.write("""\
+    int new_len, delta; /* For set, need new length and delta */
+""")
+
+    # For match, need octet string for set/get
+    if m_type == "of_match_t":
+        out.write("""\
+    of_octets_t match_octets; /* Serialized string for match */
+""")
+
+    out.write("""
+    ASSERT(%(assert_str)s);
+    ver = obj->version;
+    wbuf = OF_OBJECT_TO_WBUF(obj);
+    ASSERT(wbuf != NULL);
+
+    /* By version, determine offset and current length (where needed) */
+    switch (ver) {
+""" % dict(assert_str=obj_assert_check(cls)))
+
+    for first in sorted(ver_type_map):
+        (prev_t, prev_o) = ver_type_map[first]
+        prev_len = length_of(prev_t, first)
+        prev = first
+        out.write("    case %s:\n" % of_g.wire_ver_map[first])
+        break
+
+    for next in sorted(ver_type_map):
+        if next == first:
+            continue
+
+        (t, o) = ver_type_map[next]
+        cur_len = length_of(t, next)
+        if o == prev_o and cur_len == prev_len:
+            out.write("    case %s:\n" % of_g.wire_ver_map[next])
+            continue
+        gen_accessor_offsets(out, cls, m_name, prev, a_type, m_type, prev_o)
+        out.write("    case %s:\n" % of_g.wire_ver_map[next])
+        (prev_t, prev_o, prev_len, prev) = (t, o, cur_len, next)
+
+    gen_accessor_offsets(out, cls, m_name, next, a_type, m_type, prev_o)
+    out.write("""\
+    default:
+        ASSERT(0);
+    }
+
+    abs_offset = OF_OBJECT_ABSOLUTE_OFFSET(obj, offset);
+    ASSERT(abs_offset >= 0);
+""")
+    if not loxi_utils.type_is_scalar(m_type):
+        out.write("    ASSERT(cur_len >= 0 && cur_len < 64 * 1024);\n")
+
+    # Now generate the common accessor code
+    if a_type in ["get", "bind"]:
+        gen_get_accessor_body(out, cls, m_type, m_name)
+    else:
+        gen_set_accessor_body(out, cls, m_type, m_name)
+
+    out.write("""
+    OF_LENGTH_CHECK_ASSERT(obj);
+
+    return %s;
+}
+""" % accessor_return_success(a_type, m_type))
+
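+# Illustrative sketch of the switch gen_unified_acc_body produces for a
+# hypothetical scalar member at offset 8 in wire version 1 and offset 16 in
+# versions 2 and 3 (versions sharing offset/length share a case label):
+#
+#     switch (ver) {
+#     case OF_VERSION_1_0:
+#         offset = 8;
+#         break;
+#     case OF_VERSION_1_1:
+#     case OF_VERSION_1_2:
+#         offset = 16;
+#         break;
+#     default:
+#         ASSERT(0);
+#     }
+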
+def gen_of_obj_bind(out, cls, m_name, m_type, ver_type_map):
+    """
+    For generating the bind call for OF sub-objects
+    """
+
+    params = ",\n    ".join(param_list(cls, m_name, "bind"))
+    out.write("""
+/**
+ * Bind an object of type %(m_type)s to the parent of type %(cls)s for
+ * member %(m_name)s
+ * @param obj Pointer to an object of type %(cls)s.
+ * @param %(m_name)s Pointer to the child object of type
+ * %(m_type)s to be filled out.
+ * \ingroup %(cls)s
+ *
+ * The parameter %(m_name)s is filled out to point to the same underlying
+ * wire buffer as its parent.
+ *
+ */
+""" % dict(m_name=m_name, cls=cls, m_type=m_type))
+
+    ret_type = accessor_return_type("bind", m_type)
+    out.write("%s\n%s_%s_bind(\n    %s)\n" % (ret_type, cls, m_name, params))
+    gen_unified_acc_body(out, cls, m_name, ver_type_map, "bind", m_type)
+
+def gen_get_accessor(out, cls, m_name, m_type, ver_type_map):
+    """
+    For generating the get call for members that are not OF sub-objects
+    """
+    params = ",\n    ".join(param_list(cls, m_name, "get"))
+    out.write("""
+/**
+ * Get %(m_name)s from an object of type %(cls)s.
+ * @param obj Pointer to an object of type %(cls)s.
+ * @param %(m_name)s Pointer to the child object of type
+ * %(m_type)s to be filled out.
+ *
+ */
+""" % dict(m_name=m_name, cls=cls, m_type=m_type))
+
+    ret_type =  accessor_return_type("get", m_type)
+    out.write("%s\n%s_%s_get(\n    %s)\n" % (ret_type, cls, m_name, params))
+    gen_unified_acc_body(out, cls, m_name, ver_type_map, "get", m_type)
+
+def gen_accessor_definitions(out):
+    """
+    Generate the body of each version independent accessor
+
+    @param out The file to which to write the decs
+    """
+
+    out.write("""
+/****************************************************************
+ *
+ * Unified accessor function definitions
+ *
+ ****************************************************************/
+""")
+    for cls in of_g.standard_class_order:
+        if cls in type_maps.inheritance_map:
+            continue
+        out.write("\n/* Unified accessor functions for %s */\n" % cls)
+        if loxi_utils.class_is_list(cls):
+            gen_list_accessors(out, cls)
+            continue
+        out.write("/** \\ingroup %s \n * @{ */\n" % cls)
+        for m_name in of_g.ordered_members[cls]:
+            if m_name in of_g.skip_members:
+                continue
+            m_type = loxi_utils.member_base_type(cls, m_name)
+            ver_type_map = field_ver_get(cls, m_name)
+
+            # Generate get/bind depending on member type
+            # FIXME:  Does this do the right thing for match?
+            if loxi_utils.type_is_of_object(m_type):
+                gen_of_obj_bind(out, cls, m_name, m_type, ver_type_map)
+                gen_of_object_get(out, cls, m_name, m_type)
+            else:
+                gen_get_accessor(out, cls, m_name, m_type, ver_type_map)
+
+            # Now generate set accessor for all objects
+            params = ",\n    ".join(param_list(cls, m_name, "set"))
+            out.write("""
+/**
+ * Set %(m_name)s in an object of type %(cls)s.
+ * @param obj Pointer to an object of type %(cls)s.
+""" % dict(m_name=m_name, cls=cls, m_type=m_type))
+            if loxi_utils.type_is_scalar(m_type) or m_type == "of_octets_t":
+                out.write("""\
+ * @param %(m_name)s The value to write into the object
+ */
+""" % dict(m_name=m_name, cls=cls, m_type=m_type))
+            else:
+                out.write("""\
+ * @param %(m_name)s Pointer to the child of type %(m_type)s.
+ *
+ * If the child's wire buffer is the same as the parent's, then
+ * nothing is done as the changes have already been registered in the
+ * parent.  Otherwise, the data in the child's wire buffer is inserted
+ * into the parent's and the appropriate lengths are updated.
+ */
+""" % dict(m_name=m_name, cls=cls, m_type=m_type))
+            ret_type = accessor_return_type("set", m_type)
+            out.write("%s\n%s_%s_set(\n    %s)\n" % (ret_type, cls, m_name, params))
+            gen_unified_acc_body(out, cls, m_name, ver_type_map, "set", m_type)
+
+        out.write("\n/** @} */\n")
+
+def gen_acc_pointer_typedefs(out):
+    """
+    Generate the function pointer typedefs for in-struct accessors
+    @param out The file to which to write the typedefs
+    """
+
+    out.write("""
+/****************************************************************
+ *
+ * Accessor function pointer typedefs
+ *
+ ****************************************************************/
+
+/*
+ * Generic accessors:
+ *
+ * Many objects have a length represented in the wire buffer;
+ * wire_length_get and wire_length_set access these values directly on the
+ * wire.
+ *
+ * FIXME: TBD if wire_length_set and wire_type_set are required.
+ */
+typedef void (*of_wire_length_get_f)(of_object_t *obj, int *bytes);
+typedef void (*of_wire_length_set_f)(of_object_t *obj, int bytes);
+typedef void (*of_wire_type_get_f)(of_object_t *obj, of_object_id_t *id);
+typedef void (*of_wire_type_set_f)(of_object_t *obj, of_object_id_t id);
+""")
+    # If not using function pointers in classes, don't gen typedefs below
+    if not config_check("gen_fn_ptrs"):
+        return
+
+    # For each class, for each type it uses, generate a typedef
+    for cls in of_g.standard_class_order:
+        if cls in type_maps.inheritance_map:
+            continue
+        out.write("\n/* Accessor function pointer typedefs for %s */\n" % cls)
+        types_done = list()
+        for m_name in of_g.ordered_members[cls]:
+            (m_type, get_rv) = get_acc_rv(cls, m_name)
+            if (m_type, get_rv) in types_done:
+                continue
+            types_done.append((m_type, get_rv))
+            fn = "%s_%s" % (cls, m_type)
+            params = ", ".join(param_list(cls, m_name, "get"))
+            out.write("typedef int (*%s_get_f)(\n    %s);\n" %
+                      (fn, params))
+
+            params = ", ".join(param_list(cls, m_name, "set"))
+            out.write("typedef int (*%s_set_f)(\n    %s);\n" %
+                      (fn, params))
+        if loxi_utils.class_is_list(cls):
+            obj_type = loxi_utils.list_to_entry_type(cls)
+            out.write("""typedef int (*%(cls)s_first_f)(
+    %(cls)s_t *list,
+    %(obj_type)s_t *obj);
+typedef int (*%(cls)s_next_f)(
+    %(cls)s_t *list,
+    %(obj_type)s_t *obj);
+typedef int (*%(cls)s_append_bind_f)(
+    %(cls)s_t *list,
+    %(obj_type)s_t *obj);
+typedef int (*%(cls)s_append_f)(
+    %(cls)s_t *list,
+    %(obj_type)s_t *obj);
+""" % {"cls":cls, "obj_type":obj_type})
+
+#             out.write("""
+# typedef int (*%(cls)s_get_f)(
+#     %(cls)s_t *list,
+#     %(obj_type)s_t *obj, int index);
+# typedef int (*%(cls)s_set_f)(
+#     %(cls)s_t *list,
+#     %(obj_type)s_t *obj, int index);
+# typedef int (*%(cls)s_append_f)(
+#     %(cls)s_t *list,
+#     %(obj_type)s_t *obj, int index);
+# typedef int (*%(cls)s_insert_f)(
+#     %(cls)s_t *list,
+#     %(obj_type)s_t *obj, int index);
+# typedef int (*%(cls)s_remove_f)(
+#     %(cls)s_t *list,
+#     int index);
+# """ % {"cls":cls, "obj_type":obj_type})
+
+################################################################
+#
+# New/Delete Function Definitions
+#
+################################################################
+
+
+################################################################
+# First, some utility functions for new/delete
+################################################################
+
+def del_function_proto(cls):
+    """
+    Return the prototype for the delete operator for the given class
+    @param cls The class name
+    """
+    fn = "void\n"
+    return fn
+
+
+def instantiate_fn_ptrs(cls, ilvl, out):
+    """
+    Generate the C code to instantiate function pointers for a class
+    @param cls The class name
+    @param ilvl The base indentation level
+    @param out The file to which to write the functions
+    """
+    for m_name in of_g.ordered_members[cls]:
+        if m_name in of_g.skip_members:
+            continue
+        out.write(" " * ilvl + "obj->%s_get = %s_%s_get;\n" %
+                  (m_name, cls, m_name))
+        out.write(" " * ilvl + "obj->%s_set = %s_%s_set;\n" %
+                  (m_name, cls, m_name))
+
+################################################################
+# Routines to generate the body of new/delete functions
+################################################################
+
+def gen_init_fn_body(cls, out):
+    """
+    Generate function body for init function
+    @param cls The class name for the function
+    @param out The file to which to write
+    """
+    if cls in type_maps.inheritance_map:
+        param = "obj_p"
+    else:
+        param = "obj"
+
+    out.write("""
+/**
+ * Initialize an object of type %(cls)s.
+ *
+ * @param obj Pointer to the object to initialize
+ * @param version The wire version to use for the object
+ * @param bytes How many bytes in the object
+ * @param clean_wire Boolean: If true, clear the wire object control struct
+ *
+ * If bytes < 0, then the default fixed length is used for the object
+ *
+ * This is a "coerce" function that sets up the pointers for the
+ * accessors properly.  
+ *
+ * If anything other than 0 is passed in for the buffer size, the underlying
+ * wire buffer will have 'grow' called.
+ */
+
+void
+%(cls)s_init(%(cls)s_t *%(param)s,
+    of_version_t version, int bytes, int clean_wire)
+{
+""" % dict(cls=cls, param=param))
+
+    # Use an extra pointer to deal with inheritance classes
+    if cls in type_maps.inheritance_map:
+        out.write("""\
+    %s_header_t *obj;
+
+    obj = &obj_p->header;  /* Need instantiable subclass */
+""" % cls)
+
+    out.write("""
+    ASSERT(of_object_fixed_len[version][%(enum)s] >= 0);
+    if (clean_wire) {
+        MEMSET(obj, 0, sizeof(*obj));
+    }
+    if (bytes < 0) {
+        bytes = of_object_fixed_len[version][%(enum)s];
+    }
+    obj->version = version;
+    obj->length = bytes;
+    obj->object_id = %(enum)s;
+""" % dict(cls=cls, enum=enum_name(cls)))
+    gen_coerce_ops(out, cls)
+
+    out.write("""
+    /* Grow the wire buffer */
+    if (obj->wire_object.wbuf != NULL) {
+        int tot_bytes;
+
+        tot_bytes = bytes + obj->wire_object.obj_offset;
+        of_wire_buffer_grow(obj->wire_object.wbuf, tot_bytes);
+    }
+}
+
+""")
+
+## @fixme This should also be updated once there is a map from
+# class instance to wire length/type accessors
+def gen_wire_push_fn(cls, out):
+    """
+    Generate the calls to push values into the wire buffer
+    """
+    if type_maps.class_is_virtual(cls):
+        print "Push fn gen called for virtual class " + cls
+        return
+
+    out.write("""
+/**
+ * Helper function to push values into the wire buffer
+ */
+static inline int
+%(cls)s_push_wire_values(%(cls)s_t *obj)
+{
+""" % dict(cls=cls))
+
+    if loxi_utils.class_is_message(cls):
+        out.write("""
+    /* Message obj; push version, length and type to wire */
+    of_message_t msg;
+
+    if ((msg = OF_OBJECT_TO_MESSAGE(obj)) != NULL) {
+        of_message_version_set(msg, obj->version);
+        of_message_length_set(msg, obj->length);
+        OF_TRY(of_wire_message_object_id_set(OF_OBJECT_TO_WBUF(obj),
+                 %(name)s));
+    }
+""" % dict(name = enum_name(cls)))
+ 
+        for version in of_g.of_version_range:
+            if type_maps.class_is_extension(cls, version):
+                exp_name = type_maps.extension_to_experimenter_macro_name(cls)
+                subtype = type_maps.extension_message_to_subtype(cls, version)
+                if subtype is None or exp_name is None:
+                    print "Error in mapping extension message"
+                    print cls, version
+                    sys.exit(1)
+                out.write("""
+    if (obj->version == %(version)s) {
+        of_message_experimenter_id_set(OF_OBJECT_TO_MESSAGE(obj),
+                                       %(exp_name)s);
+        of_message_experimenter_subtype_set(OF_OBJECT_TO_MESSAGE(obj),
+                                            %(subtype)s);
+    }
+""" % dict(exp_name=exp_name, version=of_g.wire_ver_map[version],
+           subtype=str(subtype)))
+           
+    else: # Not a message
+        if loxi_utils.class_is_tlv16(cls):
+            out.write("""
+    /* TLV obj; set length and type */
+    of_tlv16_wire_length_set((of_object_t *)obj, obj->length);
+    of_tlv16_wire_object_id_set((of_object_t *)obj,
+           %(enum)s);
+""" % dict(enum=enum_name(cls)))
+            # Some tlv16 types may be extensions requiring more work
+            if cls in ["of_action_bsn_mirror", "of_action_id_bsn_mirror",
+                       "of_action_bsn_set_tunnel_dst", "of_action_id_bsn_set_tunnel_dst",
+                       "of_action_nicira_dec_ttl", "of_action_id_nicira_dec_ttl"]:
+                out.write("""
+    /* Extended TLV obj; Call specific accessor */
+    of_extension_object_id_set(obj, %(enum)s);
+""" % dict(cls=cls, enum=enum_name(cls)))
+                
+
+        if loxi_utils.class_is_oxm(cls):
+            out.write("""\
+    /* OXM obj; set length and type */
+    of_oxm_wire_length_set((of_object_t *)obj, obj->length);
+    of_oxm_wire_object_id_set((of_object_t *)obj, %(enum)s);
+""" % dict(enum=enum_name(cls)))
+        if loxi_utils.class_is_u16_len(cls) or cls == "of_packet_queue":
+            out.write("""
+    obj->wire_length_set((of_object_t *)obj, obj->length);
+""")
+
+        if cls == "of_meter_stats":
+            out.write("""
+    of_meter_stats_wire_length_set((of_object_t *)obj, obj->length);
+""" % dict(enum=enum_name(cls)))
+
+    out.write("""
+    return OF_ERROR_NONE;
+}
+""")
+
+def gen_new_fn_body(cls, out):
+    """
+    Generate function body for new function
+    @param cls The class name for the function
+    @param out The file to which to write
+    """
+
+    out.write("""
+/**
+ * \\defgroup %(cls)s %(cls)s
+ */
+""" % dict(cls=cls))
+
+    if not type_maps.class_is_virtual(cls):
+        gen_wire_push_fn(cls, out)
+
+    out.write("""
+/**
+ * Create a new %(cls)s object
+ *
+ * @param version The wire version to use for the object
+ * @return Pointer to the newly created object or NULL on error
+ *
+ * Initializes the new object with its default fixed length and associates
+ * a new underlying wire buffer.
+ *
+ * Use new_from_message to bind an existing message to a message object,
+ * or a _get function for non-message objects.
+ *
+ * \\ingroup %(cls)s
+ */
+
+%(cls)s_t *
+%(cls)s_new_(of_version_t version)
+{
+    %(cls)s_t *obj;
+    int bytes;
+
+    bytes = of_object_fixed_len[version][%(enum)s];
+
+    /* Allocate a maximum-length wire buffer assuming we'll be appending to it. */
+    if ((obj = (%(cls)s_t *)of_object_new(OF_WIRE_BUFFER_MAX_LENGTH)) == NULL) {
+        return NULL;
+    }
+
+    %(cls)s_init(obj, version, bytes, 0);
+""" % dict(cls=cls, enum=enum_name(cls)))
+    if not type_maps.class_is_virtual(cls):
+        out.write("""
+    if (%(cls)s_push_wire_values(obj) < 0) {
+        FREE(obj);
+        return NULL;
+    }
+""" % dict(cls=cls))
+
+    match_offset = v3_match_offset_get(cls)
+    if match_offset >= 0:
+        # Init length field for match object
+        out.write("""
+    /* Initialize match TLV for 1.2 */
+    /* FIXME: Check 1.3 below */
+    if ((version == OF_VERSION_1_2) || (version == OF_VERSION_1_3)) {
+        of_object_u16_set((of_object_t *)obj, %(match_offset)d + 2, 4);
+    }
+""" % dict(match_offset=match_offset))
+    out.write("""
+    return obj;
+}
+
+#if defined(OF_OBJECT_TRACKING)
+
+/*
+ * Tracking objects.  Call the new function and then record location
+ */
+
+%(cls)s_t *
+%(cls)s_new_tracking(of_version_t version,
+     const char *file, int line)
+{
+    %(cls)s_t *obj;
+
+    obj = %(cls)s_new_(version);
+    of_object_track((of_object_t *)obj, file, line);
+
+    return obj;
+}
+#endif
+""" % dict(cls=cls))
+
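+# Illustrative C usage of the generated new/delete pair (a sketch; the class
+# name is hypothetical, and _new maps to _new_ or _new_tracking as declared
+# by gen_new_function_declarations below):
+#
+#     of_flow_add_t *flow_add;
+#
+#     if ((flow_add = of_flow_add_new(OF_VERSION_1_0)) == NULL) {
+#         /* allocation or wire-value push failed */
+#     }
+#     /* ... use the generated accessors ... */
+#     of_flow_add_delete(flow_add);
+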
+
+def gen_from_message_fn_body(cls, out):
+    """
+    Generate function body for from_message function
+    @param cls The class name for the function
+    @param out The file to which to write
+    """
+    out.write("""
+/**
+ * Create a new %(cls)s object and bind it to an existing message
+ *
+ * @param msg The message to bind the new object to
+ * @return Pointer to the newly created object or NULL on error
+ *
+ * \ingroup %(cls)s
+ */
+
+%(cls)s_t *
+%(cls)s_new_from_message_(of_message_t msg)
+{
+    %(cls)s_t *obj = NULL;
+    of_version_t version;
+    int length;
+
+    if (msg == NULL) return NULL;
+
+    version = of_message_version_get(msg);
+    if (!OF_VERSION_OKAY(version)) return NULL;
+
+    length = of_message_length_get(msg);
+
+    if ((obj = (%(cls)s_t *)of_object_new(-1)) == NULL) {
+        return NULL;
+    }
+
+    %(cls)s_init(obj, version, 0, 0);
+
+    if ((of_object_buffer_bind((of_object_t *)obj, OF_MESSAGE_TO_BUFFER(msg),
+                               length, OF_MESSAGE_FREE_FUNCTION)) < 0) {
+       FREE(obj);
+       return NULL;
+    }
+    obj->length = length;
+    obj->version = version;
+
+    return obj;
+}
+
+#if defined(OF_OBJECT_TRACKING)
+
+/*
+ * Tracking objects.  Call the new function and then record location
+ */
+
+%(cls)s_t *
+%(cls)s_new_from_message_tracking(of_message_t msg,
+    const char *file, int line)
+{
+    %(cls)s_t *obj;
+
+    obj = %(cls)s_new_from_message_(msg);
+    of_object_track((of_object_t *)obj, file, line);
+
+    return obj;
+}
+#endif
+""" % dict(cls=cls))
+
+
+################################################################
+# Now the top level generator functions
+################################################################
+
+def gen_new_function_declarations(out):
+    """
+    Generate the header file declarations for new operators for all classes
+    @param out The file to which to write the decs
+    """
+
+    out.write("""
+/****************************************************************
+ *
+ * New operator declarations
+ *
+ * _new: Create a new object for writing; includes init
+ * _new_from_message: Create a new instance of the object and bind the
+ *    message data to the object
+ * _init: Initialize and optionally allocate buffer space for an
+ *    automatic instance
+ *
+ * _new and _from_message require a delete operation to be called
+ * on the object.
+ *
+ ****************************************************************/
+""")
+    out.write("""
+/*
+ * If object tracking is enabled, map "new" and "new from msg"
+ * calls to tracking versions; otherwise, directly to internal
+ * versions of fns which have the same name but end in _.
+ */
+
+#if defined(OF_OBJECT_TRACKING)
+""")
+    for cls in of_g.standard_class_order:
+        out.write("""
+extern %(cls)s_t *
+    %(cls)s_new_tracking(of_version_t version,
+        const char *file, int line);
+#define %(cls)s_new(version) \\
+    %(cls)s_new_tracking(version, \\
+        __FILE__, __LINE__)
+""" % dict(cls=cls))
+        if loxi_utils.class_is_message(cls):
+            out.write("""extern %(cls)s_t *
+    %(cls)s_new_from_message_tracking(of_message_t msg,
+        const char *file, int line);
+#define %(cls)s_new_from_message(msg) \\
+    %(cls)s_new_from_message_tracking(msg, \\
+        __FILE__, __LINE__)
+""" % dict(cls=cls))
+
+    out.write("""
+#else /* No object tracking */
+""")
+    for cls in of_g.standard_class_order:
+        out.write("""
+#define %(cls)s_new(version) \\
+    %(cls)s_new_(version)
+""" % dict(cls=cls))
+        if loxi_utils.class_is_message(cls):
+            out.write("""#define %(cls)s_new_from_message(msg) \\
+    %(cls)s_new_from_message_(msg)
+""" % dict(cls=cls))
+
+    out.write("""
+#endif /* Object tracking */
+""")
+
+    for cls in of_g.standard_class_order:
+        out.write("""
+extern %(cls)s_t *
+    %(cls)s_new_(of_version_t version);
+""" % dict(cls=cls))
+        if loxi_utils.class_is_message(cls):
+            out.write("""extern %(cls)s_t *
+    %(cls)s_new_from_message_(of_message_t msg);
+""" % dict(cls=cls))
+        out.write("""extern void %(cls)s_init(
+    %(cls)s_t *obj, of_version_t version, int bytes, int clean_wire);
+""" % dict(cls=cls))
+
+    out.write("""
+/****************************************************************
+ *
+ * Delete operator static inline definitions.
+ * These are here for type checking purposes only
+ *
+ ****************************************************************/
+""")
+    for cls in of_g.standard_class_order:
+#        if cls in type_maps.inheritance_map:
+#            continue
+        out.write("""
+/**
+ * Delete an object of type %(cls)s_t
+ * @param obj An instance of type %(cls)s_t
+ *
+ * \ingroup %(cls)s
+ */
+static inline void
+%(cls)s_delete(%(cls)s_t *obj) {
+    of_object_delete((of_object_t *)(obj));
+}
+""" % dict(cls=cls))
+
+    out.write("""
+typedef void (*of_object_init_f)(of_object_t *obj, of_version_t version,
+    int bytes, int clean_wire);
+extern of_object_init_f of_object_init_map[];
+""")
+
+    out.write("""
+/****************************************************************
+ *
+ * Function pointer initialization functions
+ * These are part of the "coerce" type casting for objects
+ *
+ ****************************************************************/
+""")
+
+#
+# @fixme Not clear that these should all be set for virtual fns
+#
+# @fixme Clean up.  should have a (language specific) map from class
+# to length and type get/set functions
+#
+
+def gen_coerce_ops(out, cls):
+    out.write("""
+    /* Set up the object's function pointers */
+""")
+
+    if loxi_utils.class_is_message(cls):
+        out.write("""
+    obj->wire_length_get = of_object_message_wire_length_get;
+    obj->wire_length_set = of_object_message_wire_length_set;
+""")
+    else:
+        if loxi_utils.class_is_tlv16(cls):
+            if not (cls in type_maps.inheritance_map): # Don't set for super
+                out.write("""
+    obj->wire_length_set = of_tlv16_wire_length_set;
+    obj->wire_type_set = of_tlv16_wire_object_id_set;\
+""")
+            out.write("""
+    obj->wire_length_get = of_tlv16_wire_length_get;
+""")
+            if loxi_utils.class_is_action(cls):
+                out.write("""
+    obj->wire_type_get = of_action_wire_object_id_get;
+""")
+            if loxi_utils.class_is_action_id(cls):
+                out.write("""
+    obj->wire_type_get = of_action_id_wire_object_id_get;
+""")
+            if loxi_utils.class_is_instruction(cls):
+                out.write("""
+    obj->wire_type_get = of_instruction_wire_object_id_get;
+""")
+            if loxi_utils.class_is_queue_prop(cls):
+                    out.write("""
+    obj->wire_type_get = of_queue_prop_wire_object_id_get;
+""")
+            if loxi_utils.class_is_table_feature_prop(cls):
+                    out.write("""
+    obj->wire_type_get = of_table_feature_prop_wire_object_id_get;
+""")
+            if loxi_utils.class_is_meter_band(cls):
+                    out.write("""
+    obj->wire_type_get = of_meter_band_wire_object_id_get;
+""")
+            if loxi_utils.class_is_hello_elem(cls):
+                    out.write("""
+    obj->wire_type_get = of_hello_elem_wire_object_id_get;
+""")
+        if loxi_utils.class_is_oxm(cls):
+            out.write("""
+    obj->wire_length_get = of_oxm_wire_length_get;
+    obj->wire_length_set = of_oxm_wire_length_set;
+    obj->wire_type_get = of_oxm_wire_object_id_get;
+    obj->wire_type_set = of_oxm_wire_object_id_set;
+""")
+        if loxi_utils.class_is_u16_len(cls):
+            out.write("""
+    obj->wire_length_get = of_u16_len_wire_length_get;
+    obj->wire_length_set = of_u16_len_wire_length_set;
+""")
+        if cls == "of_packet_queue":
+            out.write("""
+    obj->wire_length_get = of_packet_queue_wire_length_get;
+    obj->wire_length_set = of_packet_queue_wire_length_set;
+""")
+#        if cls == "of_list_meter_band_stats":
+#            out.write("""
+#    obj->wire_length_get = of_list_meter_band_stats_wire_length_get;
+#""")
+        if cls == "of_meter_stats":
+            out.write("""
+    obj->wire_length_get = of_meter_stats_wire_length_get;
+    obj->wire_length_set = of_meter_stats_wire_length_set;
+""")
+
+    if config_check("gen_fn_ptrs"):
+        if loxi_utils.class_is_list(cls):
+            out.write("""
+    obj->first = %(cls)s_first;
+    obj->next = %(cls)s_next;
+    obj->append = %(cls)s_append;
+    obj->append_bind = %(cls)s_append_bind;
+""" % dict(cls=cls))
+        else:
+            instantiate_fn_ptrs(cls, 4, out)
+
+def gen_new_function_definitions(out):
+    """
+    Generate the new operator for all classes
+
+    @param out The file to which to write the functions
+    """
+
+    out.write("\n/* New operators for each message class */\n")
+    for cls in of_g.standard_class_order:
+        out.write("\n/* New operators for %s */\n" % cls)
+        gen_new_fn_body(cls, out)
+        gen_init_fn_body(cls, out)
+        if loxi_utils.class_is_message(cls):
+            gen_from_message_fn_body(cls, out)
+
+def gen_init_map(out):
+    """
+    Generate map from object ID to type coerce function
+    """
+    out.write("""
+/**
+ * Map from object ID to type coerce function
+ */
+of_object_init_f of_object_init_map[] = {
+    (of_object_init_f)NULL,
+""")
+    count = 1
+    for i, cls in enumerate(of_g.standard_class_order):
+        if count != of_g.unified[cls]["object_id"]:
+            print "Error in class mapping: object IDs not sequential"
+            print cls, count, of_g.unified[cls]["object_id"]
+            sys.exit(1)
+        s = "(of_object_init_f)%s_init" % cls
+        if cls in type_maps.inheritance_map:
+            s = "(of_object_init_f)%s_header_init" % cls
+        if i < len(of_g.standard_class_order) - 1:
+            s += ","
+        out.write("    %-65s /* %d */\n" % (s, count))
+        count += 1
+    out.write("};\n")
+
+"""
+Document generation functions
+
+The main purpose of this section is to generate per-accessor documentation
+indicating which versions support each interface.
+"""
+
+
+def gen_accessor_doc(out, name):
+    """
+    Generate documentation for each accessor function that indicates
+    the versions supporting the accessor.
+    """
+
+    common_top_matter(out, name)
+
+    out.write("/* DOCUMENTATION ONLY */\n")
+
+    for cls in of_g.standard_class_order:
+        if cls in type_maps.inheritance_map:
+            pass # Check this
+
+        out.write("""
+/**
+ * Structure for %(cls)s object.  Get/set 
+ * accessors available in all versions unless noted otherwise
+ *
+""" % dict(cls=cls))
+        if loxi_utils.class_is_list(cls):
+            out.write("""\
+ * @param first Function of type %(cls)s_first_f.
+ * Set up a TBD class object to point to the first entry in the list
+ * @param next Function of type %(cls)s_next_f.
+ * Advance a TBD class object to the next entry in the list
+ * @param append_bind Function of type %(cls)s_append_bind_f
+ * Set up a TBD class object for appending to the end of the current list
+ * @param append  Function of type @ref %(cls)s_append_f.
+ * Copy an item to the end of a list
+""" % dict(cls=cls))
+
+        for m_name in of_g.ordered_members[cls]:
+            if m_name in of_g.skip_members:
+                continue
+            ver_type_map = field_ver_get(cls, m_name)
+            (m_type, get_rv) = get_acc_rv(cls, m_name)
+            if len(ver_type_map) == 3:
+                # ver_string = "Available in all versions"
+                ver_string = ""
+            else:
+                ver_string = "("
+                for ver in sorted(ver_type_map):
+                    ver_string += " " + of_g.short_version_names[ver]
+                ver_string += ")."
+
+            f_name = acc_name(cls, m_name)
+            out.write("""\
+ * @param %(m_name)s_get/set %(ver_string)s
+ *   Accessors for %(m_name)s, a variable of type %(m_type)s.  Functions
+ *   are of type %(f_name)s_get_f and _set_f.
+ *
+""" % dict(f_name=f_name, m_name=m_name, ver_string=ver_string, m_type=m_type))
+
+        out.write("""\
+ */
+typedef struct %(cls)s_s %(cls)s_t;
+""" % dict(cls=cls))
+
+    out.write("#endif /* _LOCI_DOC_H_ */\n")
+
+################################################################
+#
+# For fun, here are some unified, traditional C structure representations
+#
+################################################################
+
+def gen_cof_to_wire(out):
+    pass
+
+def gen_wire_to_cof(out):
+    pass
+
+def gen_cof_instance(out, cls):
+    out.write("struct c%s_s {\n" % cls)
+    for m in of_g.ordered_members[cls]:
+        if m in of_g.skip_members:
+            continue
+        entry = of_g.unified[cls]["union"][m]
+        cof_type = type_to_cof_type(entry["m_type"])
+        out.write("    %-20s %s;\n" % (cof_type, m))
+    out.write("};\n\n");
+
+def gen_cof_structs(out):
+    """
+    Generate non-version specific (common) representation of structures
+
+    @param out The file to which to write the functions
+    """
+
+    out.write("\n/* Common, unified OpenFlow structure representations */\n")
+    for cls in of_g.standard_class_order:
+        if cls in type_maps.inheritance_map:
+            continue
+        gen_cof_instance(out, cls)
+
+################################################################
+#
+# Generate code samples for applications.
+#
+################################################################
+
+def gen_code_samples(out, name):
+    out.write("""
+#if 0 /* Do not compile in */
+/**
+ * @file %(name)s
+ *
+ * These are code samples for inclusion in other components
+ */
+
+""" % dict(name=name))
+
+    gen_jump_table_template(out)
+    # These are messages that a switch might expect.
+    msg_list = ["of_echo_request",
+                "of_hello",
+                "of_packet_in",
+                "of_packet_out",
+                "of_port_mod",
+                "of_port_stats_request",
+                "of_queue_get_config_request",
+                "of_queue_stats_request",
+                "of_flow_add",
+                "of_flow_modify",
+                "of_flow_modify_strict",
+                "of_flow_delete",
+                "of_flow_delete_strict",
+                "of_get_config_request",
+                "of_flow_stats_request",
+                "of_barrier_request",
+                "of_echo_reply",
+                "of_aggregate_stats_request",
+                "of_desc_stats_request",
+                "of_table_stats_request",
+                "of_features_request",
+                "of_table_mod",
+                "of_set_config",
+                "of_experimenter",
+                "of_experimenter_stats_request",
+                "of_group_desc_stats_request",
+                "of_group_features_stats_request",
+                "of_role_request"]
+
+    gen_message_handler_templates(out, msgs=msg_list)
+
+    out.write("""
+#endif
+""")
+
+def gen_jump_table_template(out=sys.stdout, all_unhandled=True,
+                            cxn_type="ls_cxn_handle_t", 
+                            unhandled="unhandled_message"):
+    """
+    Generate a template for a jump table.
+    @param out The file to which to write the functions
+    """
+    out.write("""
+/*
+ * Simple jump table definition for message handling
+ */
+typedef int (*msg_handler_f)(%(cxn_type)s cxn, of_object_t *obj);
+typedef msg_handler_f msg_jump_table_t[OF_MESSAGE_OBJECT_COUNT];
+
+/* Jump table template for message objects */
+extern msg_jump_table_t jump_table;
+
+/* C-code template */
+msg_jump_table_t jump_table = {
+    %(unhandled)s, /* OF_OBJECT; place holder for generic object  */
+""" % dict(unhandled=unhandled, cxn_type=cxn_type))
+    count = 0
+    fn_name = unhandled
+    for cls in of_g.ordered_messages:
+        comma = ","
+        count += 1
+        if count == len(of_g.ordered_messages):
+            comma = " "
+        if not all_unhandled:
+            fn_name = "%s_handler" % cls[3:]
+        out.write("    %s%s /* %s */\n" % (fn_name, comma, enum_name(cls)))
+            
+    out.write("};\n")
+
+def gen_message_switch_stmt_template(out=sys.stdout, all_unhandled=True,
+                                     cxn_type="ls_cxn_handle_t", 
+                                     unhandled="unhandled_message"):
+    out.write("""
+/*
+ * Simple switch statement for message handling
+ */
+
+    switch (obj->object_id) {
+""")
+    fn_name = unhandled
+    for cls in of_g.ordered_messages:
+        if not all_unhandled:
+            fn_name = "%s_handler" % cls[3:]
+        out.write("""
+    case %(enum)s:
+        rv = %(fn_name)s(cxn, obj);
+        break;
+""" % dict(fn_name=fn_name, cls=cls, enum=enum_name(cls)))
+    out.write("""
+    default:
+        rv = LS_ERROR_PARAM;
+        break;
+    }
+
+    TRACE("Handled msg %p with rv %d (%s)", obj, rv, ls_error_strings[rv]);
+
+    return rv;
+""")
+
+
+def gen_message_handler_templates(out=sys.stdout, cxn_type="ls_cxn_handle_t",
+                                  unhandled="unhandled_message", msgs=None):
+    gen_jump_table_template(out, False, cxn_type)
+    out.write("""
+/**
+ * Function for unhandled message
+ */
+static int
+unhandled_message(%(cxn_type)s cxn, of_object_t *obj)
+{
+    (void)cxn;
+    (void)obj;
+    TRACE("Unhandled message %%p.  Object id %%d", obj, obj->object_id);
+
+    return LS_ERROR_UNAVAIL;
+}
+""" % dict(unhandled=unhandled, cxn_type=cxn_type))
+
+    if not msgs:
+        msgs = of_g.ordered_messages
+    for cls in msgs:
+        out.write("""
+/**
+ * Handle a %(s_cls)s message
+ * @param cxn Connection handler for the owning connection
+ * @param _obj Generic type object for the message to be coerced
+ * @returns Error code
+ */
+
+static int
+%(s_cls)s_handler(%(cxn_type)s cxn, of_object_t *_obj)
+{
+    %(cls)s_t *obj;
+
+    TRACE("Handling %(cls)s message: %%p.", obj);
+    obj = (%(cls)s_t *)_obj;
+
+    /* Handle object of type %(cls)s_t */
+
+    return LS_ERROR_NONE;
+}
+""" % dict(s_cls=cls[3:], cls=cls, cxn_type=cxn_type))
+    gen_message_switch_stmt_template(out, False, cxn_type)
+
+def gen_setup_from_add_fns(out):
+    """
+    Generate functions that set up objects based on a flow add
+
+    Okay, this is getting out of hand.  We need to refactor the code
+    so that this can be done without so much pain.
+    """
+    out.write("""
+
+/* Flow stats entry setup for all versions */
+
+static int
+flow_stats_entry_setup_from_flow_add_common(of_flow_stats_entry_t *obj,
+                                            of_flow_add_t *flow_add,
+                                            of_object_t *effects,
+                                            int entry_match_offset,
+                                            int add_match_offset)
+{
+    of_list_action_t actions;
+    int entry_len, add_len;
+    of_wire_buffer_t *wbuf;
+    int abs_offset;
+    int delta;
+    uint16_t val16;
+    uint64_t cookie;
+    of_octets_t match_octets;
+
+    /* Effects may come from different places */
+    if (effects != NULL) {
+        OF_TRY(of_flow_stats_entry_actions_set(obj,
+               (of_list_action_t *)effects));
+    } else {
+        of_flow_add_actions_bind(flow_add, &actions);
+        OF_TRY(of_flow_stats_entry_actions_set(obj, &actions));
+    }
+
+    /* Transfer the underlying match object from add to stats entry */
+    wbuf = OF_OBJECT_TO_WBUF(obj);
+    entry_len = _WIRE_MATCH_PADDED_LEN(obj, entry_match_offset);
+    add_len = _WIRE_MATCH_PADDED_LEN(flow_add, add_match_offset);
+
+    match_octets.bytes = add_len;
+    match_octets.data = OF_OBJECT_BUFFER_INDEX(flow_add, add_match_offset);
+
+    /* Copy data into flow entry */
+    abs_offset = OF_OBJECT_ABSOLUTE_OFFSET(obj, entry_match_offset);
+    of_wire_buffer_replace_data(wbuf, abs_offset, entry_len,
+                                match_octets.data, add_len);
+
+    /* Not scalar, update lengths if needed */
+    delta = add_len - entry_len;
+    if (delta != 0) {
+        /* Update parent(s) */
+        of_object_parent_length_update((of_object_t *)obj, delta);
+    }
+
+    of_flow_add_cookie_get(flow_add, &cookie);
+    of_flow_stats_entry_cookie_set(obj, cookie);
+
+    of_flow_add_priority_get(flow_add, &val16);
+    of_flow_stats_entry_priority_set(obj, val16);
+
+    of_flow_add_idle_timeout_get(flow_add, &val16);
+    of_flow_stats_entry_idle_timeout_set(obj, val16);
+
+    of_flow_add_hard_timeout_get(flow_add, &val16);
+    of_flow_stats_entry_hard_timeout_set(obj, val16);
+
+    return OF_ERROR_NONE;
+}
+
+/* Flow removed setup for all versions */
+
+static int
+flow_removed_setup_from_flow_add_common(of_flow_removed_t *obj,
+                                        of_flow_add_t *flow_add,
+                                        int removed_match_offset,
+                                        int add_match_offset)
+{
+    int add_len, removed_len;
+    of_wire_buffer_t *wbuf;
+    int abs_offset;
+    int delta;
+    uint16_t val16;
+    uint64_t cookie;
+    of_octets_t match_octets;
+
+    /* Transfer the underlying match object from add to removed obj */
+    wbuf = OF_OBJECT_TO_WBUF(obj);
+    removed_len = _WIRE_MATCH_PADDED_LEN(obj, removed_match_offset);
+    add_len = _WIRE_MATCH_PADDED_LEN(flow_add, add_match_offset);
+
+    match_octets.bytes = add_len;
+    match_octets.data = OF_OBJECT_BUFFER_INDEX(flow_add, add_match_offset);
+
+    /* Copy data into flow removed */
+    abs_offset = OF_OBJECT_ABSOLUTE_OFFSET(obj, removed_match_offset);
+    of_wire_buffer_replace_data(wbuf, abs_offset, removed_len,
+                                match_octets.data, add_len);
+
+    /* Not scalar, update lengths if needed */
+    delta = add_len - removed_len;
+    if (delta != 0) {
+        /* Update parent(s) */
+        of_object_parent_length_update((of_object_t *)obj, delta);
+    }
+
+    of_flow_add_cookie_get(flow_add, &cookie);
+    of_flow_removed_cookie_set(obj, cookie);
+
+    of_flow_add_priority_get(flow_add, &val16);
+    of_flow_removed_priority_set(obj, val16);
+
+    of_flow_add_idle_timeout_get(flow_add, &val16);
+    of_flow_removed_idle_timeout_set(obj, val16);
+ 
+    if (obj->version >= OF_VERSION_1_2) {
+        of_flow_add_hard_timeout_get(flow_add, &val16);
+        of_flow_removed_hard_timeout_set(obj, val16);
+    }
+
+    return OF_ERROR_NONE;
+}
+
+/* Set up a flow removed message from the original add */
+
+int
+of_flow_removed_setup_from_flow_add(of_flow_removed_t *obj,
+                                    of_flow_add_t *flow_add)
+{
+    switch (obj->version) {
+    case OF_VERSION_1_0:
+        return flow_removed_setup_from_flow_add_common(obj, flow_add, 
+                                                       8, 8);
+        break;
+    case OF_VERSION_1_1:
+    case OF_VERSION_1_2:
+    case OF_VERSION_1_3:
+        return flow_removed_setup_from_flow_add_common(obj, flow_add, 
+                                                       48, 48);
+        break;
+    default:
+        return OF_ERROR_VERSION;
+        break;
+    }
+
+    return OF_ERROR_NONE;
+}
+
+
+/* Set up a packet in message from the original add */
+
+int
+of_packet_in_setup_from_flow_add(of_packet_in_t *obj,
+                                 of_flow_add_t *flow_add)
+{
+    int add_len, pkt_in_len;
+    of_wire_buffer_t *wbuf;
+    int abs_offset;
+    int delta;
+    const int pkt_in_match_offset = 16;
+    const int add_match_offset = 48;
+    of_octets_t match_octets;
+
+    if (obj->version < OF_VERSION_1_2) {
+        /* Nothing to be done before OF 1.2 */
+        return OF_ERROR_NONE;
+    }
+
+    /* Transfer match struct from flow add to packet in object */
+    wbuf = OF_OBJECT_TO_WBUF(obj);
+    pkt_in_len = _WIRE_MATCH_PADDED_LEN(obj, pkt_in_match_offset);
+    add_len = _WIRE_MATCH_PADDED_LEN(flow_add, add_match_offset);
+
+    match_octets.bytes = add_len;
+    match_octets.data = OF_OBJECT_BUFFER_INDEX(flow_add, add_match_offset);
+
+    /* Copy data into pkt_in msg */
+    abs_offset = OF_OBJECT_ABSOLUTE_OFFSET(obj, pkt_in_match_offset);
+    of_wire_buffer_replace_data(wbuf, abs_offset, pkt_in_len,
+                                match_octets.data, add_len);
+
+    /* Not scalar, update lengths if needed */
+    delta = add_len - pkt_in_len;
+    if (delta != 0) {
+        /* Update parent(s) */
+        of_object_parent_length_update((of_object_t *)obj, delta);
+    }
+
+    return OF_ERROR_NONE;
+}
+
+/* Set up a stats entry from the original add */
+
+int
+of_flow_stats_entry_setup_from_flow_add(of_flow_stats_entry_t *obj,
+                                        of_flow_add_t *flow_add,
+                                        of_object_t *effects)
+{
+    switch (obj->version) {
+    case OF_VERSION_1_0:
+        return flow_stats_entry_setup_from_flow_add_common(obj, flow_add,
+                                                           effects, 4, 8);
+        break;
+    case OF_VERSION_1_1:
+    case OF_VERSION_1_2:
+    case OF_VERSION_1_3:
+        return flow_stats_entry_setup_from_flow_add_common(obj, flow_add, 
+                                                           effects, 48, 48);
+        break;
+    default:
+        return OF_ERROR_VERSION;
+    }
+
+    return OF_ERROR_NONE;
+}
+""")
diff --git a/c_gen/c_dump_gen.py b/c_gen/c_dump_gen.py
new file mode 100644
index 0000000..dbf1e7a
--- /dev/null
+++ b/c_gen/c_dump_gen.py
@@ -0,0 +1,270 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+@brief Dump function generation
+
+Generates dump function files.
+
+"""
+
+import sys
+import of_g
+import loxi_front_end.match as match
+import loxi_front_end.flags as flags
+from generic_utils import *
+import loxi_front_end.type_maps as type_maps
+import loxi_utils.loxi_utils as loxi_utils
+import loxi_front_end.identifiers as identifiers
+from c_test_gen import var_name_map
+
+def gen_obj_dump_h(out, name):
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/**
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * Header file for object dumping. 
+ */
+
+/**
+ * Dump object declarations
+ *
+ * Routines that emit a dump of each object.
+ *
+ */
+
+#if !defined(_LOCI_OBJ_DUMP_H_)
+#define _LOCI_OBJ_DUMP_H_
+
+#include <loci/loci.h>
+#include <stdio.h>
+
+/* g++ requires this to pick up PRI, etc.
+ * See  http://gcc.gnu.org/ml/gcc-help/2006-10/msg00223.html
+ */
+#if !defined(__STDC_FORMAT_MACROS)
+#define __STDC_FORMAT_MACROS
+#endif
+#include <inttypes.h>
+
+
+/**
+ * Dump any OF object. 
+ */
+int of_object_dump(loci_writer_f writer, void* cookie, of_object_t* obj); 
+
+
+
+
+
+
+""")
+
+    type_to_emitter = dict(
+
+        )
+    for version in of_g.of_version_range:
+        for cls in of_g.standard_class_order:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            out.write("""\
+int %(cls)s_%(ver_name)s_dump(loci_writer_f writer, void* cookie, %(cls)s_t *obj);
+""" % dict(cls=cls, ver_name=loxi_utils.version_to_name(version)))
+
+    out.write("""
+#endif /* _LOCI_OBJ_DUMP_H_ */
+""")
+
+def gen_obj_dump_c(out, name):
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/**
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * Source file for object dumping. 
+ * 
+ */
+
+#define DISABLE_WARN_UNUSED_RESULT
+#include <loci/loci.h>
+#include <loci/loci_dump.h>
+#include <loci/loci_obj_dump.h>
+
+static int
+unknown_dump(loci_writer_f writer, void* cookie, of_object_t *obj)
+{
+    return writer(cookie, "Unable to print object of type %d, version %d\\n", 
+                         obj->object_id, obj->version);
+}    
+""")
+
+    for version in of_g.of_version_range:
+        ver_name = loxi_utils.version_to_name(version)
+        for cls in of_g.standard_class_order:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            out.write("""
+int
+%(cls)s_%(ver_name)s_dump(loci_writer_f writer, void* cookie, %(cls)s_t *obj)
+{
+    int out = 0;
+""" % dict(cls=cls, ver_name=ver_name))
+
+            members, member_types = loxi_utils.all_member_types_get(cls, version)
+            for m_type in member_types:
+                if loxi_utils.type_is_scalar(m_type) or m_type in \
+                        ["of_match_t", "of_octets_t"]:
+                    # Declare instance of these
+                    out.write("    %s %s;\n" % (m_type, var_name_map(m_type)))
+                else:
+                    out.write("""
+    %(m_type)s %(v_name)s;
+"""  % dict(m_type=m_type, v_name=var_name_map(m_type)))
+                    if loxi_utils.class_is_list(m_type):
+                        base_type = loxi_utils.list_to_entry_type(m_type)
+                        out.write("    %s elt;\n    int rv;\n" % base_type)
+            out.write("""
+    out += writer(cookie, "Object of type %(cls)s\\n");
+""" % dict(cls=cls))
+            for member in members:
+                m_type = member["m_type"]
+                m_name = member["name"]
+                emitter = "LOCI_DUMP_" + loxi_utils.type_to_short_name(m_type)
+                if loxi_utils.skip_member_name(m_name):
+                    continue
+                if (loxi_utils.type_is_scalar(m_type) or
+                    m_type in ["of_match_t", "of_octets_t"]):
+                    out.write("""
+    %(cls)s_%(m_name)s_get(obj, &%(v_name)s);
+    out += writer(cookie, "  %(m_name)s (%(m_type)s):  ");
+    out += %(emitter)s(writer, cookie, %(v_name)s);
+    out += writer(cookie, "\\n");
+""" % dict(cls=cls, m_name=m_name, m_type=m_type,
+           v_name=var_name_map(m_type), emitter=emitter))
+                elif loxi_utils.class_is_list(m_type):
+                    sub_cls = m_type[:-2] # Trim _t
+                    elt_type = loxi_utils.list_to_entry_type(m_type)
+                    out.write("""
+    out += writer(cookie, "List of %(elt_type)s\\n");
+    %(cls)s_%(m_name)s_bind(obj, &%(v_name)s);
+    %(u_type)s_ITER(&%(v_name)s, &elt, rv) {
+        of_object_dump(writer, cookie, (of_object_t *)&elt);
+    }
+""" % dict(sub_cls=sub_cls, u_type=sub_cls.upper(), v_name=var_name_map(m_type),
+           elt_type=elt_type, cls=cls, m_name=m_name, m_type=m_type))
+                else:
+                    sub_cls = m_type[:-2] # Trim _t
+                    out.write("""
+    %(cls)s_%(m_name)s_bind(obj, &%(v_name)s);
+    out += %(sub_cls)s_%(ver_name)s_dump(writer, cookie, &%(v_name)s);
+""" % dict(cls=cls, sub_cls=sub_cls, m_name=m_name, 
+           v_name=var_name_map(m_type), ver_name=ver_name))
+
+            out.write("""
+    return out;
+}
+""")
+    out.write("""
+/**
+ * Log a match entry
+ */
+int
+loci_dump_match(loci_writer_f writer, void* cookie, of_match_t *match)
+{
+    int out = 0;
+
+    out += writer(cookie, "Match obj, version %d.\\n", match->version);
+""")
+
+    for key, entry in match.of_match_members.items():
+        m_type = entry["m_type"]
+        emitter = "LOCI_DUMP_" + loxi_utils.type_to_short_name(m_type)
+        out.write("""
+    if (OF_MATCH_MASK_%(ku)s_ACTIVE_TEST(match)) {
+        out += writer(cookie, "  %(key)s (%(m_type)s) active: Value ");
+        out += %(emitter)s(writer, cookie, match->fields.%(key)s);
+        out += writer(cookie, "\\n    Mask ");
+        out += %(emitter)s(writer, cookie, match->masks.%(key)s);
+        out += writer(cookie, "\\n");
+    }
+""" % dict(key=key, ku=key.upper(), emitter=emitter, m_type=m_type))
+
+    out.write("""
+    return out;
+}
+""")
+
+    # Generate big table indexed by version and object
+    for version in of_g.of_version_range:
+        out.write("""
+static loci_obj_dump_f dump_funs_v%(version)s[OF_OBJECT_COUNT] = {
+""" % dict(version=version))
+        out.write("    unknown_dump, /* of_object, not a valid specific type */\n")
+        for j, cls in enumerate(of_g.all_class_order):
+            comma = ""
+            if j < len(of_g.all_class_order) - 1: # Avoid ultimate comma
+                comma = ","
+
+            if (not loxi_utils.class_in_version(cls, version) or 
+                    cls in type_maps.inheritance_map):
+                out.write("    unknown_dump%s\n" % comma);
+            else:
+                out.write("    %s_%s_dump%s\n" % 
+                          (cls, loxi_utils.version_to_name(version), comma))
+        out.write("};\n\n")
+
+    out.write("""
+static loci_obj_dump_f *dump_funs[5] = {
+    NULL,
+    dump_funs_v1,
+    dump_funs_v2,
+    dump_funs_v3,
+    dump_funs_v4
+};
+
+int
+of_object_dump(loci_writer_f writer, void* cookie, of_object_t *obj)
+{
+    if ((obj->object_id > 0) && (obj->object_id < OF_OBJECT_COUNT)) {
+        if (((obj)->version > 0) && ((obj)->version <= OF_VERSION_1_2)) {
+            /* @fixme VERSION */
+            return dump_funs[obj->version][obj->object_id](writer, cookie, (of_object_t *)obj);
+        } else {
+            return writer(cookie, "Bad version %d\\n", obj->version);
+        }
+    }
+    return writer(cookie, "Bad object id %d\\n", obj->object_id);
+}
+""")
+
diff --git a/c_gen/c_match.py b/c_gen/c_match.py
new file mode 100644
index 0000000..1b5be1e
--- /dev/null
+++ b/c_gen/c_match.py
@@ -0,0 +1,1317 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+# @brief Generate wire to generic match conversion functions
+#
+# @fixme This has lots of C specific code that should be moved into c_gen
+
+# of_match_to_wire_match(match, wire_match)
+# of_wire_match_to_match(wire_match, match)
+#    Version is taken from the source in each case
+#
+# name
+# type
+# conditions
+# v3 ident
+# takes mask
+
+import sys
+import of_g
+import loxi_front_end.oxm as oxm
+import loxi_front_end.match as match
+import c_code_gen
+
+def match_c_top_matter(out, name):
+    """
+    Generate top matter for match C file
+
+    @param name The name of the output file
+    @param out The output file object
+    """
+    c_code_gen.common_top_matter(out, name)
+    out.write("#include \"loci_log.h\"\n")
+    out.write("#include <loci/loci.h>\n")
+
+def match_h_top_matter(out, name):
+    """
+    Generate top matter for the C file
+
+    @param name The name of the output file
+    @param ih_name The name of the internal header file
+    @param out The output file object
+    """
+    c_code_gen.common_top_matter(out, name)
+    out.write("""
+#include <loci/loci_base.h>
+""")
+
+def gen_declarations(out):
+    out.write("""
+/*
+ * Match serialize/deserialize declarations
+ * Wire match conversion function declarations
+ */
+extern int of_match_serialize(of_version_t version, of_match_t *match,
+                              of_octets_t *octets);
+extern int of_match_deserialize(of_version_t version, of_match_t *match,
+                                of_octets_t *octets);
+extern int of_match_v1_to_match(of_match_v1_t *src, of_match_t *dst);
+extern int of_match_v2_to_match(of_match_v2_t *src, of_match_t *dst);
+extern int of_match_v3_to_match(of_match_v3_t *src, of_match_t *dst);
+extern int of_match_to_wire_match_v1(of_match_t *src, of_match_v1_t *dst);
+extern int of_match_to_wire_match_v2(of_match_t *src, of_match_v2_t *dst);
+extern int of_match_to_wire_match_v3(of_match_t *src, of_match_v3_t *dst);
+""")
+
+def gen_v4_match_compat(out):
+    """
+    Code for coercing version 1.3 matches to 1.2 matches
+
+    @FIXME This is a stopgap and needs to get cleaned up.
+    """
+    out.write("""
+/**
+ * Definitions to coerce v4 match (version 1.3) to v3 matches
+ * (version 1.2).
+ * @FIXME This is a stopgap and needs to get cleaned up.
+ */
+#define of_match_v4_t of_match_v3_t
+#define of_match_v4_init of_match_v3_init
+#define of_match_v4_new of_match_v3_new
+#define of_match_v4_to_match of_match_v3_to_match
+#define of_match_to_wire_match_v4 of_match_to_wire_match_v3
+#define of_match_v4_delete of_match_v3_delete
+""")
+
+def gen_match_macros(out):
+    out.write("""
+
+/**
+ * Definitions for wildcard macros for OF_VERSION_1_0
+ */
+
+""")
+    for key in match.of_v1_keys:
+        entry = match.of_match_members[key]
+        if "v1_wc_shift" in entry:
+            if key in ["ipv4_src", "ipv4_dst"]:
+                out.write("""
+#define OF_MATCH_V1_WC_%(ku)s_SHIFT %(val)d
+#define OF_MATCH_V1_WC_%(ku)s_MASK (0x3f << %(val)d)
+#define OF_MATCH_V1_WC_%(ku)s_CLEAR(wc) ((wc) &= ~(0x3f << %(val)d))
+#define OF_MATCH_V1_WC_%(ku)s_SET(wc, value) do {   \\
+        OF_MATCH_V1_WC_%(ku)s_CLEAR(wc); \\
+        ((wc) |= (((value) & 0x3f) << %(val)d)); \\
+    } while (0)
+#define OF_MATCH_V1_WC_%(ku)s_TEST(wc) ((wc) & (0x3f << %(val)d))
+#define OF_MATCH_V1_WC_%(ku)s_GET(wc) (((wc) >> %(val)d) & 0x3f)
+""" % dict(ku=key.upper(), val=entry["v1_wc_shift"]))
+            else:
+                out.write("""
+#define OF_MATCH_V1_WC_%(ku)s_SHIFT %(val)d
+#define OF_MATCH_V1_WC_%(ku)s_MASK (1 << %(val)d)
+#define OF_MATCH_V1_WC_%(ku)s_SET(wc) ((wc) |= (1 << %(val)d))
+#define OF_MATCH_V1_WC_%(ku)s_CLEAR(wc) ((wc) &= ~(1 << %(val)d))
+#define OF_MATCH_V1_WC_%(ku)s_TEST(wc) ((wc) & (1 << %(val)d))
+""" % dict(ku=key.upper(), val=entry["v1_wc_shift"]))
+
+    out.write("""
+
+/**
+ * Definitions for wildcard macros for OF_VERSION_1_1
+ */
+""")
+
+    for key in sorted(match.of_v2_keys):
+        entry = match.of_match_members[key]
+        if "v2_wc_shift" in entry:
+            out.write("""
+#define OF_MATCH_V2_WC_%(ku)s_SHIFT %(val)d
+#define OF_MATCH_V2_WC_%(ku)s_MASK (1 << %(val)d)
+#define OF_MATCH_V2_WC_%(ku)s_SET(wc) ((wc) |= (1 << %(val)d))
+#define OF_MATCH_V2_WC_%(ku)s_CLEAR(wc) ((wc) &= ~(1 << %(val)d))
+#define OF_MATCH_V2_WC_%(ku)s_TEST(wc) ((wc) & (1 << %(val)d))
+""" % dict(ku=key.upper(), val=entry["v2_wc_shift"]))
+
+
+def gen_match_struct(out=sys.stdout):
+    out.write("/* Unified, flat OpenFlow match structure based on OF 1.2 */\n")
+    out.write("typedef struct of_match_fields_s {\n")
+    out.write("    /* Version 1.2 is used for field names */\n")
+    for name in match.match_keys_sorted:
+        entry = match.of_match_members[name]
+        out.write("    %-20s %s;\n" % (entry["m_type"], entry["name"]))
+    out.write("""
+} of_match_fields_t;
+
+/**
+ * @brief The LOCI match structure.
+ */
+
+typedef struct of_match_s {
+    of_version_t version;
+    of_match_fields_t fields;
+    of_match_fields_t masks;
+} of_match_t;
+
+/**
+ * IP Mask map.  IP mask wildcards from OF 1.0 are interpreted as
+ * indices into the map below.
+ *
+ * of_ip_mask_map: Array mapping index to mask
+ * of_ip_mask_map_init_done: Boolean indication set when the map is initialized
+ * of_ip_mask_map_init: Initialize to default values; set the "init done" flag.
+ */
+#define OF_IP_MASK_MAP_COUNT 64
+extern uint32_t of_ip_mask_map[OF_IP_MASK_MAP_COUNT];
+extern int of_ip_mask_map_init_done;
+
+#define OF_IP_MASK_INIT_CHECK \
+    if (!of_ip_mask_map_init_done) of_ip_mask_map_init()
+
+/**
+ * Initialize map
+ */
+extern void of_ip_mask_map_init(void);
+
+extern int of_ip_mask_map_set(int index, uint32_t mask);
+extern int of_ip_mask_map_get(int index, uint32_t *mask);
+
+/**
+ * @brief Map from mask to index
+ */
+
+extern int of_ip_mask_to_index(uint32_t mask);
+
+/**
+ * @brief Map from index to mask
+ */
+
+extern uint32_t of_ip_index_to_mask(int index);
+
+/**
+ * The signalling of an untagged packet varies by OF version.
+ * Use this macro to set the field value.
+ */
+#define OF_MATCH_UNTAGGED_VLAN_ID(version) \\
+    ((version) == OF_VERSION_1_0 ? 0xffff : \\
+     ((version) == OF_VERSION_1_1 ? 0xffff : 0))
+
+/**
+ * Version 1.1 had the notion of an "any" VLAN, but the field must still be set
+ */
+#define OF_MATCH_VLAN_TAG_PRESENT_ANY_ID(version) \\
+    ((version) == OF_VERSION_1_0 ? 0 /* @fixme */  : \\
+     ((version) == OF_VERSION_1_1 ? 0xfffe : 0x1000))
+""")
+
+def gen_oxm_defines(out):
+    """
+    Generate verbatim definitions for OXM
+    """
+    out.write("""
+
+/* These are from the OpenFlow 1.2 header file */
+
+/* OXM index values for bitmaps and parsing */
+enum of_oxm_index_e {
+    OF_OXM_INDEX_IN_PORT        = 0,  /* Switch input port. */
+    OF_OXM_INDEX_IN_PHY_PORT    = 1,  /* Switch physical input port. */
+    OF_OXM_INDEX_METADATA       = 2,  /* Metadata passed between tables. */
+    OF_OXM_INDEX_ETH_DST        = 3,  /* Ethernet destination address. */
+    OF_OXM_INDEX_ETH_SRC        = 4,  /* Ethernet source address. */
+    OF_OXM_INDEX_ETH_TYPE       = 5,  /* Ethernet frame type. */
+    OF_OXM_INDEX_VLAN_VID       = 6,  /* VLAN id. */
+    OF_OXM_INDEX_VLAN_PCP       = 7,  /* VLAN priority. */
+    OF_OXM_INDEX_IP_DSCP        = 8,  /* IP DSCP (6 bits in ToS field). */
+    OF_OXM_INDEX_IP_ECN         = 9,  /* IP ECN (2 bits in ToS field). */
+    OF_OXM_INDEX_IP_PROTO       = 10, /* IP protocol. */
+    OF_OXM_INDEX_IPV4_SRC       = 11, /* IPv4 source address. */
+    OF_OXM_INDEX_IPV4_DST       = 12, /* IPv4 destination address. */
+    OF_OXM_INDEX_TCP_SRC        = 13, /* TCP source port. */
+    OF_OXM_INDEX_TCP_DST        = 14, /* TCP destination port. */
+    OF_OXM_INDEX_UDP_SRC        = 15, /* UDP source port. */
+    OF_OXM_INDEX_UDP_DST        = 16, /* UDP destination port. */
+    OF_OXM_INDEX_SCTP_SRC       = 17, /* SCTP source port. */
+    OF_OXM_INDEX_SCTP_DST       = 18, /* SCTP destination port. */
+    OF_OXM_INDEX_ICMPV4_TYPE    = 19, /* ICMP type. */
+    OF_OXM_INDEX_ICMPV4_CODE    = 20, /* ICMP code. */
+    OF_OXM_INDEX_ARP_OP         = 21, /* ARP opcode. */
+    OF_OXM_INDEX_ARP_SPA        = 22, /* ARP source IPv4 address. */
+    OF_OXM_INDEX_ARP_TPA        = 23, /* ARP target IPv4 address. */
+    OF_OXM_INDEX_ARP_SHA        = 24, /* ARP source hardware address. */
+    OF_OXM_INDEX_ARP_THA        = 25, /* ARP target hardware address. */
+    OF_OXM_INDEX_IPV6_SRC       = 26, /* IPv6 source address. */
+    OF_OXM_INDEX_IPV6_DST       = 27, /* IPv6 destination address. */
+    OF_OXM_INDEX_IPV6_FLABEL    = 28, /* IPv6 Flow Label */
+    OF_OXM_INDEX_ICMPV6_TYPE    = 29, /* ICMPv6 type. */
+    OF_OXM_INDEX_ICMPV6_CODE    = 30, /* ICMPv6 code. */
+    OF_OXM_INDEX_IPV6_ND_TARGET = 31, /* Target address for ND. */
+    OF_OXM_INDEX_IPV6_ND_SLL    = 32, /* Source link-layer for ND. */
+    OF_OXM_INDEX_IPV6_ND_TLL    = 33, /* Target link-layer for ND. */
+    OF_OXM_INDEX_MPLS_LABEL     = 34, /* MPLS label. */
+    OF_OXM_INDEX_MPLS_TC        = 35, /* MPLS TC. */
+};
+
+#define OF_OXM_BIT(index) (((uint64_t) 1) << (index))
+
+/*
+ * The generic match structure uses the OXM bit indices for its
+ * bitmasks of active and masked values
+ */
+""")
+    for key, entry in match.of_match_members.items():
+        out.write("""
+/* Mask/value check/set macros for %(key)s */
+
+/**
+ * Set the mask for an exact match of %(key)s
+ */
+#define OF_MATCH_MASK_%(ku)s_EXACT_SET(_match)   \\
+    MEMSET(&(_match)->masks.%(key)s, 0xff, \\
+        sizeof(((_match)->masks).%(key)s))
+
+/**
+ * Clear the mask for %(key)s making that field inactive for the match
+ */
+#define OF_MATCH_MASK_%(ku)s_CLEAR(_match) \\
+    MEMSET(&(_match)->masks.%(key)s, 0, \\
+        sizeof(((_match)->masks).%(key)s))
+
+/**
+ * Test whether the match is exact for %(key)s
+ */
+#define OF_MATCH_MASK_%(ku)s_EXACT_TEST(_match) \\
+    OF_VARIABLE_IS_ALL_ONES(&(((_match)->masks).%(key)s))
+
+/**
+ * Test whether key %(key)s is being checked in the match
+ */
+#define OF_MATCH_MASK_%(ku)s_ACTIVE_TEST(_match) \\
+    OF_VARIABLE_IS_NON_ZERO(&(((_match)->masks).%(key)s))
+
+""" % dict(key=key, bit=match.oxm_index(key), ku=key.upper()))
+
+def gen_incompat_members(out=sys.stdout):
+    """
+    Generate a macro that lists all the unified fields which are
+    incompatible with v1 matches
+    """
+    out.write("""
+/* Identify bits in unified match that are incompatible with V1, V2 matches */
+#define OF_MATCH_V1_INCOMPAT ( (uint64_t)0 """)
+    for key in match.of_match_members:
+        if key in match.of_v1_keys:
+            continue
+        out.write("\\\n    | ((uint64_t)1 << %s)" % match.oxm_index(key))
+    out.write(")\n\n")
+
+    out.write("#define OF_MATCH_V2_INCOMPAT ( (uint64_t)0 ")
+    for key in match.of_match_members:
+        if key in match.of_v2_keys:
+            continue
+        out.write("\\\n    | ((uint64_t)1 << %s)" % match.oxm_index(key))
+    out.write(""")
+
+/* Indexed by version number */
+extern uint64_t of_match_incompat[4];
+""")
+
+
+# # FIXME:  Make these version specific
+# def name_to_index(a, name, key="name"):
+#     """
+#     Given an array, a, with each entry a dict, and a name,
+#     find the entry with key matching name and return the index
+#     """
+#     count = 0
+#     for e in a:
+#         if e[key] == name:
+#             return count
+#         count += 1
+#     return -1
+
+def gen_wc_convert_literal(out):
+    """
+    A bunch of literal C code that's associated with match conversions
+    @param out The output file handle
+    """
+    out.write("""
+
+/* Some internal macros and utility functions */
+
+/* For counting bits in a uint32 */
+#define _VAL_AND_5s(v)  ((v) & 0x55555555)
+#define _VAL_EVERY_OTHER(v)  (_VAL_AND_5s(v) + _VAL_AND_5s(v >> 1))
+#define _VAL_AND_3s(v)  ((v) & 0x33333333)
+#define _VAL_PAIRS(v)  (_VAL_AND_3s(v) + _VAL_AND_3s(v >> 2))
+#define _VAL_QUADS(v)  (((val) + ((val) >> 4)) & 0x0F0F0F0F)
+#define _VAL_BYTES(v)  ((val) + ((val) >> 8))
+
+/**
+ * Counts the number of bits set in an integer
+ */
+static inline int
+_COUNT_BITS(unsigned int val)
+{
+    val = _VAL_EVERY_OTHER(val);
+    val = _VAL_PAIRS(val);
+    val = _VAL_QUADS(val);
+    val = _VAL_BYTES(val);
+
+    return (val & 0XFF) + ((val >> 16) & 0xFF);
+}
+
+/* Indexed by version number */
+uint64_t of_match_incompat[4] = {
+    -1,
+    OF_MATCH_V1_INCOMPAT,
+    OF_MATCH_V2_INCOMPAT,
+    0
+};
+
+""")
+
+
+def gen_unified_match_to_v1(out):
+    """
+    Generate C code to convert a unified match structure to a V1 match struct
+    @param out The output file handle
+    """
+
+    out.write("""
+/**
+ * Check if match is compatible with OF 1.0
+ * @param match The match being checked
+ */
+static inline int
+of_match_v1_compat_check(of_match_t *match)
+{
+""")
+    for key in match.of_match_members:
+        if key in match.of_v1_keys:
+            continue
+        out.write("""
+    if (OF_MATCH_MASK_%(ku)s_ACTIVE_TEST(match)) {
+        return 0;
+    }
+""" % dict(ku=key.upper()))
+
+    out.write("""
+    return 1;
+}
+""")
+
+    out.write("""
+/**
+ * Convert a generic match object to an OF_VERSION_1_0 object
+ * @param src Pointer to the generic match object source
+ * @param dst Pointer to the OF 1.0 wire structure
+ *
+ * The wire structure is initialized by this function if it does
+ * not have the proper object ID.
+ */
+
+int
+of_match_to_wire_match_v1(of_match_t *src, of_match_v1_t *dst)
+{
+    of_wc_bmap_t wildcards = 0;
+    int ip_mask_index;
+
+    if ((src == NULL) || (dst == NULL)) {
+        return OF_ERROR_PARAM;
+    }
+    if (!of_match_v1_compat_check(src)) {
+        return OF_ERROR_COMPAT;
+    }
+    if (dst->object_id != OF_MATCH_V1) {
+        of_match_v1_init(dst, OF_VERSION_1_0, 0, 0);
+    }
+""")
+    for key in sorted(match.of_v1_keys):
+        if key in ["ipv4_src", "ipv4_dst"]: # Special cases for masks here
+            out.write("""
+    if (OF_MATCH_MASK_%(ku)s_ACTIVE_TEST(src)) {
+        ip_mask_index = of_ip_mask_to_index(src->masks.%(key)s);
+        of_match_v1_%(key)s_set(dst, src->fields.%(key)s);
+    } else { /* Wildcarded, look for 0 mask */
+        ip_mask_index = of_ip_mask_to_index(0);
+    }
+    OF_MATCH_V1_WC_%(ku)s_SET(wildcards, ip_mask_index);
+""" % dict(key=key, ku=key.upper()))
+        else:
+            out.write("""
+    if (OF_MATCH_MASK_%(ku)s_ACTIVE_TEST(src)) {
+        of_match_v1_%(key)s_set(dst, src->fields.%(key)s);
+    } else {
+        OF_MATCH_V1_WC_%(ku)s_SET(wildcards);
+    }
+""" % dict(key=key, ku=key.upper()))
+
+    out.write("""
+    of_match_v1_wildcards_set(dst, wildcards);
+
+    return OF_ERROR_NONE;
+}
+""")
+
+def all_ones_mask(d_type):
+    if d_type == "of_mac_addr_t":
+        return "of_mac_addr_all_ones"
+    else:
+        return "((%s) -1)" % d_type
+
+def gen_unified_match_to_v2(out):
+    """
+    Generate C code to convert a unified match structure to a V2 match struct
+    @param out The output file handle
+    """
+
+    out.write("""
+/**
+ * Check if match is compatible with OF 1.1
+ * @param match The match being checked
+ */
+static inline int
+of_match_v2_compat_check(of_match_t *match)
+{
+""")
+    for key in match.of_match_members:
+        if key in match.of_v2_keys:
+            continue
+        out.write("""
+    if (OF_MATCH_MASK_%(ku)s_ACTIVE_TEST(match)) {
+        return 0;
+    }
+""" % dict(ku=key.upper()))
+
+    out.write("""
+    return 1;
+}
+""")
+
+    out.write("""
+/**
+ * Convert a generic match object to an OF_VERSION_1_1 object
+ * @param src Pointer to the generic match object source
+ * @param dst Pointer to the OF 1.1 wire structure
+ *
+ * The wire structure is initialized by this function if it does
+ * not have the proper object ID.
+ */
+
+int
+of_match_to_wire_match_v2(of_match_t *src, of_match_v2_t *dst)
+{
+    of_wc_bmap_t wildcards = 0;
+
+    if ((src == NULL) || (dst == NULL)) {
+        return OF_ERROR_PARAM;
+    }
+    if (!of_match_v2_compat_check(src)) {
+        return OF_ERROR_COMPAT;
+    }
+    if (dst->object_id != OF_MATCH_V2) {
+        of_match_v2_init(dst, OF_VERSION_1_1, 0, 0);
+    }
+""")
+    for key in match.of_v2_keys:
+        if key in match.of_v2_full_mask:
+            ones_mask = all_ones_mask(match.of_match_members[key]["m_type"])
+            out.write("""
+    if (OF_MATCH_MASK_%(ku)s_ACTIVE_TEST(src)) {
+        if (!OF_MATCH_MASK_%(ku)s_EXACT_TEST(src)) {
+            of_match_v2_%(key)s_mask_set(dst,
+                src->masks.%(key)s);
+        } else { /* Exact match; use all ones mask */
+            of_match_v2_%(key)s_mask_set(dst,
+                %(ones_mask)s);
+        }
+        of_match_v2_%(key)s_set(dst, src->fields.%(key)s);
+    }
+
+""" % dict(key=key, ku=key.upper(), ones_mask=ones_mask))
+        else:
+            out.write("""
+    if (!OF_MATCH_MASK_%(ku)s_EXACT_TEST(src)) {
+        return OF_ERROR_COMPAT;
+    }
+    if (OF_MATCH_MASK_%(ku)s_ACTIVE_TEST(src)) {
+        of_match_v2_%(key)s_set(dst, src->fields.%(key)s);
+    } else {
+        OF_MATCH_V2_WC_%(ku)s_SET(wildcards);
+    }
+""" % dict(key=key, ku=key.upper(),
+           wc_bit="OF_MATCH_WC_V2_%s" % key.upper()))
+
+    out.write("""
+    of_match_v2_wildcards_set(dst, wildcards);
+
+    return OF_ERROR_NONE;
+}
+""")
+
+def gen_unified_match_to_v3(out):
+    """
+    Generate C code to convert a unified match structure to a V3 match
+
+    This is much easier as the unified struct is based on V3
+    @param out The output file handle
+    """
+    out.write("""
+static int
+populate_oxm_list(of_match_t *src, of_list_oxm_t *oxm_list)
+{
+    of_oxm_t oxm_entry;
+
+    /* For each active member, add an OXM entry to the list */
+""")
+    # @fixme Would like to generate the list in some reasonable order
+    for key, entry in match.of_match_members.items():
+        out.write("""\
+    if (OF_MATCH_MASK_%(ku)s_ACTIVE_TEST(src)) {
+        if (!OF_MATCH_MASK_%(ku)s_EXACT_TEST(src)) {
+            of_oxm_%(key)s_masked_t *elt;
+            elt = &oxm_entry.%(key)s_masked;
+
+            of_oxm_%(key)s_masked_init(elt,
+                src->version, -1, 1);
+            of_list_oxm_append_bind(oxm_list, &oxm_entry);
+            of_oxm_%(key)s_masked_value_set(elt, 
+                   src->fields.%(key)s);
+            of_oxm_%(key)s_masked_value_mask_set(elt, 
+                   src->masks.%(key)s);
+        } else {  /* Active, but not masked */
+            of_oxm_%(key)s_t *elt;
+            elt = &oxm_entry.%(key)s;
+            of_oxm_%(key)s_init(elt,
+                src->version, -1, 1);
+            of_list_oxm_append_bind(oxm_list, &oxm_entry);
+            of_oxm_%(key)s_value_set(elt, src->fields.%(key)s);
+        }
+    }
+""" % dict(key=key, ku=key.upper()))
+    out.write("""
+    return OF_ERROR_NONE;
+}
+
+/**
+ * Convert a generic match object to an OF_VERSION_1_2 object
+ * @param src Pointer to the generic match object source
+ * @param dst Pointer to the OF 1.2 wire structure
+ *
+ * The wire structure is initialized by this function if the object
+ * ID in the object is not already correct.
+ */
+
+int
+of_match_to_wire_match_v3(of_match_t *src, of_match_v3_t *dst)
+{
+    int rv = OF_ERROR_NONE;
+    of_list_oxm_t *oxm_list;
+
+    if ((src == NULL) || (dst == NULL)) {
+        return OF_ERROR_PARAM;
+    }
+    if (dst->object_id != OF_MATCH_V3) {
+        of_match_v3_init(dst, src->version, 0, 0);
+    }
+    if ((oxm_list = of_list_oxm_new(src->version)) == NULL) {
+        return OF_ERROR_RESOURCE;
+    }
+
+    rv = populate_oxm_list(src, oxm_list);
+
+    if (rv == OF_ERROR_NONE) {
+        rv = of_match_v3_oxm_list_set(dst, oxm_list);
+    }
+
+    of_list_oxm_delete(oxm_list);
+
+    return rv;
+}
+""")
+
+def gen_v1_to_unified_match(out):
+    """
+    Generate the code that maps a v1 wire format match object
+    to a unified match object
+    """
+    # for each v1 member, if not in wildcards
+    # translate to unified.  Treat nw_src/dst specially
+    out.write("""
+
+/**
+ * Convert an OF_VERSION_1_0 object to a generic match object
+ * @param src Pointer to the OF 1.0 wire structure source
+ * @param dst Pointer to the generic match object destination
+ *
+ * The generic match object (dst) is zeroed and initialized by this function.
+ */
+
+int
+of_match_v1_to_match(of_match_v1_t *src, of_match_t *dst)
+{
+    of_wc_bmap_t wc;
+    int count;
+
+    MEMSET(dst, 0, sizeof(*dst));
+    dst->version = src->version;
+
+    of_match_v1_wildcards_get(src, &wc);
+""")
+    # Deal with nw fields first
+    out.write("""
+    /* Handle L3 src and dst wildcarding first */
+    /* @fixme Check mask values are properly treated for ipv4 src/dst */
+    if ((count = OF_MATCH_V1_WC_IPV4_DST_GET(wc)) < 32) {
+        of_match_v1_ipv4_dst_get(src, &dst->fields.ipv4_dst);
+        if (count > 0) { /* Not exact match */
+            dst->masks.ipv4_dst = ~(((uint32_t)1 << count) - 1);
+        } else {
+            OF_MATCH_MASK_IPV4_DST_EXACT_SET(dst);
+        }
+    }
+""")
+    for key in sorted(match.of_v1_keys):
+        if key in ["ipv4_src", "ipv4_dst"]: # Special cases for masks here
+            out.write("""
+    count = OF_MATCH_V1_WC_%(ku)s_GET(wc);
+    dst->masks.%(key)s = of_ip_index_to_mask(count);
+    /* @todo Review if we should only get the addr when masks.%(key)s != 0 */
+    of_match_v1_%(key)s_get(src, &dst->fields.%(key)s);
+""" % dict(ku=key.upper(), key=key))
+        else:
+            out.write("""
+    if (!(OF_MATCH_V1_WC_%(ku)s_TEST(wc))) {
+        of_match_v1_%(key)s_get(src, &dst->fields.%(key)s);
+        OF_MATCH_MASK_%(ku)s_EXACT_SET(dst);
+    }
+""" % dict(ku=key.upper(), key=key))
+
+    out.write("""
+    return OF_ERROR_NONE;
+}
+""")
+
+def gen_v2_to_unified_match(out):
+    """
+    Generate the code that maps a v2 wire format match object
+    to a unified match object
+    """
+    out.write("""
+int
+of_match_v2_to_match(of_match_v2_t *src, of_match_t *dst)
+{
+    of_wc_bmap_t wc;
+
+    MEMSET(dst, 0, sizeof(*dst));
+    dst->version = src->version;
+
+    of_match_v2_wildcards_get(src, &wc);
+""")
+    for key in match.of_v2_keys:
+        if key in match.of_v2_full_mask:
+            out.write("""
+    of_match_v2_%(key)s_mask_get(src, &dst->masks.%(key)s);
+    if (OF_VARIABLE_IS_NON_ZERO(&dst->masks.%(key)s)) { /* Matching something */
+        of_match_v2_%(key)s_get(src, &dst->fields.%(key)s);
+    }
+""" % dict(ku=key.upper(), key=key))
+        else:
+            out.write("""
+    if (!(OF_MATCH_V2_WC_%(ku)s_TEST(wc))) {
+        of_match_v2_%(key)s_get(src, &dst->fields.%(key)s);
+        OF_MATCH_MASK_%(ku)s_EXACT_SET(dst);
+    }
+""" % dict(ku=key.upper(), key=key))
+
+    out.write("""
+    return OF_ERROR_NONE;
+}
+""")
+
+
+def gen_v3_to_unified_match(out):
+    """
+    Generate the code that maps a v3 wire format match object
+    to a unified match object
+    """
+    # Iterate thru the OXM list members
+    out.write("""
+int
+of_match_v3_to_match(of_match_v3_t *src, of_match_t *dst)
+{
+    int rv;
+    of_list_oxm_t oxm_list;
+    of_oxm_t oxm_entry;
+""")
+#    for key in match.of_match_members:
+#        out.write("    of_oxm_%s_t *%s;\n" % (key, key))
+#        out.write("    of_oxm_%s_masked_t *%s_masked;\n" % (key, key))
+
+    out.write("""
+    MEMSET(dst, 0, sizeof(*dst));
+    dst->version = src->version;
+
+    of_match_v3_oxm_list_bind(src, &oxm_list);
+    rv = of_list_oxm_first(&oxm_list, &oxm_entry);
+
+    while (rv == OF_ERROR_NONE) {
+        switch (oxm_entry.header.object_id) { /* What kind of entry is this */
+""")
+    for key in match.of_match_members:
+        out.write("""
+        case OF_OXM_%(ku)s_MASKED:
+            of_oxm_%(key)s_masked_value_mask_get(
+                &oxm_entry.%(key)s_masked,
+                &dst->masks.%(key)s);
+            of_oxm_%(key)s_masked_value_get(
+                &oxm_entry.%(key)s,
+                &dst->fields.%(key)s);
+            break;
+        case OF_OXM_%(ku)s:
+            OF_MATCH_MASK_%(ku)s_EXACT_SET(dst);
+            of_oxm_%(key)s_value_get(
+                &oxm_entry.%(key)s,
+                &dst->fields.%(key)s);
+            break;
+""" % (dict(ku=key.upper(), key=key)))
+
+    out.write("""
+        default:
+             /* @fixme Add debug statement */
+             return OF_ERROR_PARSE;
+        } /* end switch */
+        rv = of_list_oxm_next(&oxm_list, &oxm_entry);
+    } /* end OXM iteration */
+
+    return OF_ERROR_NONE;
+}
+""")
+
+def gen_serialize(out):
+    out.write("""
+/**
+ * Serialize a match structure according to the version passed
+ * @param version The version to use for serialization protocol
+ * @param match Pointer to the structure to serialize
+ * @param octets Pointer to an octets object to fill out
+ *
+ * A buffer is allocated using normal internal ALLOC/FREE semantics
+ * and pointed to by the octets object.  The length of the resulting
+ * serialization is in octets->bytes.
+ *
+ * For 1.2 matches, returns the padded serialized structure
+ *
+ * Note that FREE must be called on octets->data when processing of
+ * the object is complete.
+ */
+
+int
+of_match_serialize(of_version_t version, of_match_t *match, of_octets_t *octets)
+{
+    int rv;
+
+    switch (version) {
+""")
+    for version in of_g.of_version_range:
+        out.write("""
+    case %(ver_name)s:
+        {
+            of_match_v%(version)s_t *wire_match;
+            wire_match = of_match_v%(version)s_new(version);
+            if (wire_match == NULL) {
+                return OF_ERROR_RESOURCE;
+            }
+            if ((rv = of_match_to_wire_match_v%(version)s(match, wire_match)) < 0) {
+                of_match_v%(version)s_delete(wire_match);
+                return rv;
+            }
+            octets->bytes = OF_MATCH_BYTES(wire_match->length);
+            of_object_wire_buffer_steal((of_object_t *)wire_match,
+                                        &octets->data);
+            of_match_v%(version)s_delete(wire_match);
+        }
+        break;
+""" % dict(version=version, ver_name=of_g.of_version_wire2name[version]))
+    out.write("""
+    default:
+        return OF_ERROR_COMPAT;
+    }
+
+    return OF_ERROR_NONE;
+}
+""")
+
+
+def gen_deserialize(out):
+    out.write("""
+/**
+ * Deserialize a match structure according to the version passed
+ * @param version The protocol version to use for deserialization
+ * @param match Pointer to the structure to fill out
+ * @param octets Pointer to an octets object holding serial buffer
+ *
+ * Normally the octets object will point to a part of a wire buffer.
+ */
+
+int
+of_match_deserialize(of_version_t version, of_match_t *match,
+                     of_octets_t *octets)
+{
+    if (octets->bytes == 0) { /* No match specified means all wildcards */
+        MEMSET(match, 0, sizeof(*match));
+        match->version = version;
+
+        return OF_ERROR_NONE;
+    }
+
+    switch (version) {
+""")
+    for version in of_g.of_version_range:
+        out.write("""
+    case %(ver_name)s:
+        { /* FIXME: check init bytes */
+            uint8_t *tmp;
+            of_match_v%(version)d_t wire_match;
+            of_match_v%(version)d_init(&wire_match,
+                   %(ver_name)s, -1, 1);
+            of_object_buffer_bind((of_object_t *)&wire_match, 
+                octets->data, octets->bytes, NULL);
+            OF_TRY(of_match_v%(version)d_to_match(&wire_match, match));
+
+            /* Free the wire buffer control block without freeing
+             * octets->bytes. */
+            of_wire_buffer_steal(wire_match.wire_object.wbuf, &tmp);
+        }
+        break;
+""" % dict(version=version, ver_name=of_g.of_version_wire2name[version]))
+
+    out.write("""
+    default:
+        return OF_ERROR_COMPAT;
+    }
+
+    return OF_ERROR_NONE;
+}
+""")
+
+def gen_match_comp(out=sys.stdout):
+    """
+    Generate match comparison functions
+    """
+    out.write("""
+/**
+ * Determine "more specific" relationship between mac addrs
+ * @return true if v1 is equal to or more specific than v2
+ *
+ * @todo Could be optimized
+ *
+ * Check: Every bit in v2 is set in v1; v1 may have add'l bits set.
+ * That is, return false if there is a bit set in v2 and not in v1.
+ */
+
+static inline int
+of_more_specific_ipv6(of_ipv6_t *v1, of_ipv6_t *v2) {
+    int idx;
+
+    for (idx = 0; idx < OF_IPV6_BYTES; idx++) {
+        /* If there's a bit set in v2 that is clear in v1, return false */
+        if (~v1->addr[idx] & v2->addr[idx]) {
+            return 0;
+        }
+    }
+
+    return 1;
+}
+
+/**
+ * Boolean test if two values agree when restricted to a mask
+ */
+
+static inline int
+of_restricted_match_ipv6(of_ipv6_t *v1, of_ipv6_t *v2, of_ipv6_t *mask) {
+    int idx;
+
+    for (idx = 0; idx < OF_IPV6_BYTES; idx++) {
+        if ((v1->addr[idx] & mask->addr[idx]) != 
+               (v2->addr[idx] & mask->addr[idx])) {
+            return 0;
+        }
+    }
+
+    return 1;
+}
+
+/**
+ * Boolean test if two values "overlap" (agree on common masks)
+ */
+
+static inline int
+of_overlap_ipv6(of_ipv6_t *v1, of_ipv6_t *v2,
+                         of_ipv6_t *m1, of_ipv6_t *m2) {
+    int idx;
+
+    for (idx = 0; idx < OF_IPV6_BYTES; idx++) {
+        if (((v1->addr[idx] & m1->addr[idx]) & m2->addr[idx]) != 
+               ((v2->addr[idx] & m1->addr[idx]) & m2->addr[idx])) {
+            return 0;
+        }
+    }
+
+    return 1;
+}
+
+#define OF_MORE_SPECIFIC_IPV6(v1, v2) of_more_specific_ipv6((v1), (v2))
+
+#define OF_RESTRICTED_MATCH_IPV6(v1, v2, mask) \\
+    of_restricted_match_ipv6((v1), (v2), (mask))
+
+#define OF_OVERLAP_IPV6(v1, v2, m1, m2) of_overlap_ipv6((v1), (v2), (m1), (m2))
+
+/**
+ * Determine "more specific" relationship between mac addrs
+ * @return true if v1 is equal to or more specific than v2
+ *
+ * @todo Could be optimized
+ *
+ * Check: Every bit in v2 is set in v1; v1 may have add'l bits set.
+ * That is, return false if there is a bit set in v2 and not in v1.
+ */
+static inline int
+of_more_specific_mac_addr(of_mac_addr_t *v1, of_mac_addr_t *v2) {
+    int idx;
+
+    for (idx = 0; idx < OF_MAC_ADDR_BYTES; idx++) {
+        /* If there's a bit set in v2 that is clear in v1, return false */
+        if (~v1->addr[idx] & v2->addr[idx]) {
+            return 0;
+        }
+    }
+
+    return 1;
+}
+
+/**
+ * Boolean test if two values agree when restricted to a mask
+ */
+static inline int
+of_restricted_match_mac_addr(of_mac_addr_t *v1, of_mac_addr_t *v2, 
+                             of_mac_addr_t *mask) {
+    int idx;
+
+    for (idx = 0; idx < OF_MAC_ADDR_BYTES; idx++) {
+        if ((v1->addr[idx] & mask->addr[idx]) != 
+               (v2->addr[idx] & mask->addr[idx])) {
+            return 0;
+        }
+    }
+
+    return 1;
+}
+
+/**
+ * Boolean test if two values "overlap" (agree on common masks)
+ */
+
+static inline int
+of_overlap_mac_addr(of_mac_addr_t *v1, of_mac_addr_t *v2,
+                         of_mac_addr_t *m1, of_mac_addr_t *m2) {
+    int idx;
+
+    for (idx = 0; idx < OF_MAC_ADDR_BYTES; idx++) {
+        if (((v1->addr[idx] & m1->addr[idx]) & m2->addr[idx]) != 
+               ((v2->addr[idx] & m1->addr[idx]) & m2->addr[idx])) {
+            return 0;
+        }
+    }
+
+    return 1;
+}
+
+#define OF_MORE_SPECIFIC_MAC_ADDR(v1, v2) of_more_specific_mac_addr((v1), (v2))
+
+#define OF_RESTRICTED_MATCH_MAC_ADDR(v1, v2, mask) \\
+    of_restricted_match_mac_addr((v1), (v2), (mask))
+
+#define OF_OVERLAP_MAC_ADDR(v1, v2, m1, m2) \\
+    of_overlap_mac_addr((v1), (v2), (m1), (m2))
+
+/**
+ * More-specific-than macro for integer types; see above
+ * @return true if v1 is equal to or more specific than v2
+ *
+ * If there is a bit that is set in v2 and not in v1, return false.
+ */
+#define OF_MORE_SPECIFIC_INT(v1, v2) (!(~(v1) & (v2)))
+
+/**
+ * Boolean test if two values agree when restricted to a mask
+ */
+#define OF_RESTRICTED_MATCH_INT(v1, v2, mask) \\
+   (((v1) & (mask)) == ((v2) & (mask)))
+
+
+#define OF_OVERLAP_INT(v1, v2, m1, m2) \\
+    ((((v1) & (m1)) & (m2)) == (((v2) & (m1)) & (m2)))
+""")
+
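+    # Worked example for the integer macros above: with masks m1 = 0xffffff00
+    # (a /24) and m2 = 0xffff0000 (a /16), ~m1 & m2 == 0, so
+    # OF_MORE_SPECIFIC_INT(m1, m2) is true; the reverse test
+    # OF_MORE_SPECIFIC_INT(m2, m1) sees ~m2 & m1 == 0x0000ff00 and is false.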
+    out.write("""
+/**
+ * Compare two match structures for exact equality
+ *
+ * We just do memcmp assuming structs were memset to 0 on init
+ */
+static inline int
+of_match_eq(of_match_t *match1, of_match_t *match2)
+{
+    return (MEMCMP(match1, match2, sizeof(of_match_t)) == 0);
+}
+
+/**
+ * Is the entry match more specific than (or equal to) the query match?
+ * @param entry Match expected to be more specific (subset of query)
+ * @param query Match expected to be less specific (superset of entry)
+ * @returns Boolean, see below
+ *
+ * The assumption is that a query is being done for a non-strict
+ * match against an entry in a table.  The result is true if the
+ * entry match indicates a more specific (but compatible) flow space
+ * specification than that in the query match.  This means that the
+ * values agree between the two where they overlap, and that each mask
+ * for the entry is more specific than that of the query.
+ *
+ * The query has the less specific mask (fewer mask bits) so it is
+ * used for the mask when checking values.
+ */
+
+static inline int
+of_match_more_specific(of_match_t *entry, of_match_t *query)
+{
+    of_match_fields_t *q_m, *e_m;  /* Short hand for masks, fields */
+    of_match_fields_t *q_f, *e_f;
+
+    q_m = &query->masks;
+    e_m = &entry->masks;
+    q_f = &query->fields;
+    e_f = &entry->fields;
+""")
+    for key, entry in match.of_match_members.items():
+        q_m = "&q_m->%s" % key
+        e_m = "&e_m->%s" % key
+        q_f = "&q_f->%s" % key
+        e_f = "&e_f->%s" % key
+        if entry["m_type"] == "of_ipv6_t":
+            comp = "OF_MORE_SPECIFIC_IPV6"
+            match_type = "OF_RESTRICTED_MATCH_IPV6"
+        elif entry["m_type"] == "of_mac_addr_t":
+            comp = "OF_MORE_SPECIFIC_MAC_ADDR"
+            match_type = "OF_RESTRICTED_MATCH_MAC_ADDR"
+        else: # Integer
+            comp = "OF_MORE_SPECIFIC_INT"
+            match_type = "OF_RESTRICTED_MATCH_INT"
+            q_m = "q_m->%s" % key
+            e_m = "e_m->%s" % key
+            q_f = "q_f->%s" % key
+            e_f = "e_f->%s" % key
+        out.write("""
+    /* Mask and values for %(key)s */
+    if (!%(comp)s(%(e_m)s, %(q_m)s)) {
+        return 0;
+    }
+    if (!%(match_type)s(%(e_f)s, %(q_f)s,
+            %(q_m)s)) {
+        return 0;
+    }
+""" % dict(match_type=match_type, comp=comp, q_f=q_f, e_f=e_f, 
+           q_m=q_m, e_m=e_m, key=key))
+
+    out.write("""
+    return 1;
+}
+""")
+
+    out.write("""
+
+/**
+ * Do two entries overlap?
+ * @param match1 One match struct
+ * @param match2 Another match struct
+ * @returns Boolean: true if there is a packet that would match both
+ *
+ */
+
+static inline int
+of_match_overlap(of_match_t *match1, of_match_t *match2)
+{
+    of_match_fields_t *m1, *m2;  /* Short hand for masks, fields */
+    of_match_fields_t *f1, *f2;
+
+    m1 = &match1->masks;
+    m2 = &match2->masks;
+    f1 = &match1->fields;
+    f2 = &match2->fields;
+""")
+    for key, entry in match.of_match_members.items():
+        m1 = "&m1->%s" % key
+        m2 = "&m2->%s" % key
+        f1 = "&f1->%s" % key
+        f2 = "&f2->%s" % key
+        if entry["m_type"] == "of_ipv6_t":
+            check = "OF_OVERLAP_IPV6"
+        elif entry["m_type"] == "of_mac_addr_t":
+            check = "OF_OVERLAP_MAC_ADDR"
+        else: # Integer
+            check = "OF_OVERLAP_INT"
+            m1 = "m1->%s" % key
+            m2 = "m2->%s" % key
+            f1 = "f1->%s" % key
+            f2 = "f2->%s" % key
+        out.write("""
+    /* Check overlap for %(key)s */
+    if (!%(check)s(%(f1)s, %(f2)s, 
+        %(m2)s, %(m1)s)) {
+        return 0; /* This field differentiates; all done */
+    }
+""" % dict(check=check, f1=f1, f2=f2, m1=m1, m2=m2, key=key))
+
+    out.write("""
+    return 1; /* No field differentiates matches */
+}
+""")
+
+def gen_match_conversions(out=sys.stdout):
+    match.match_sanity_check()
+    gen_wc_convert_literal(out)
+    out.write("""
+/**
+ * IP mask map.  IP mask wildcards from OF 1.0 are interpreted as
+ * indices into the map below.
+ */
+
+int of_ip_mask_map_init_done = 0;
+uint32_t of_ip_mask_map[OF_IP_MASK_MAP_COUNT];
+void
+of_ip_mask_map_init(void)
+{
+    int idx;
+
+    MEMSET(of_ip_mask_map, 0, sizeof(of_ip_mask_map));
+    for (idx = 0; idx < 32; idx++) {
+        of_ip_mask_map[idx] = ~((1U << idx) - 1);
+    }
+
+    of_ip_mask_map_init_done = 1;
+}
+
+/**
+ * @brief Set non-default IP mask for given index
+ */
+int
+of_ip_mask_map_set(int index, uint32_t mask)
+{
+    OF_IP_MASK_INIT_CHECK;
+
+    if ((index < 0) || (index >= OF_IP_MASK_MAP_COUNT)) {
+        return OF_ERROR_RANGE;
+    }
+    of_ip_mask_map[index] = mask;
+
+    return OF_ERROR_NONE;
+}
+
+/**
+ * @brief Get a non-default IP mask for given index
+ */
+int
+of_ip_mask_map_get(int index, uint32_t *mask)
+{
+    OF_IP_MASK_INIT_CHECK;
+
+    if ((mask == NULL) || (index < 0) || (index >= OF_IP_MASK_MAP_COUNT)) {
+        return OF_ERROR_RANGE;
+    }
+    *mask = of_ip_mask_map[index];
+
+    return OF_ERROR_NONE;
+}
+
+/**
+ * @brief Return the index (used as the WC field in 1.0 match) given the mask
+ */
+
+int
+of_ip_mask_to_index(uint32_t mask)
+{
+    int idx;
+
+    OF_IP_MASK_INIT_CHECK;
+
+    /* Handle most common cases directly */
+    if ((mask == 0) && (of_ip_mask_map[63] == 0)) {
+        return 63;
+    }
+    if ((mask == 0xffffffff) && (of_ip_mask_map[0] == 0xffffffff)) {
+        return 0;
+    }
+
+    for (idx = 0; idx < OF_IP_MASK_MAP_COUNT; idx++) {
+        if (mask == of_ip_mask_map[idx]) {
+            return idx;
+        }
+    }
+
+    LOCI_LOG_INFO("OF 1.0: Could not map IP addr mask 0x%x", mask);
+    return 0x3f;
+}
+
+/**
+ * @brief Return the mask for the given index
+ */
+
+uint32_t
+of_ip_index_to_mask(int index)
+{
+    OF_IP_MASK_INIT_CHECK;
+
+    if ((index < 0) || (index >= OF_IP_MASK_MAP_COUNT)) {
+        LOCI_LOG_INFO("IP index to mask: bad index %d", index);
+        return 0;
+    }
+
+    return of_ip_mask_map[index];
+}
+
+""")
+
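+    # Worked example for the IP mask map emitted above: after
+    # of_ip_mask_map_init(), index 8 holds ~((1U << 8) - 1) == 0xffffff00, so
+    # of_ip_mask_to_index(0xffffff00) == 8 and
+    # of_ip_index_to_mask(8) == 0xffffff00.  Indices 32..63 stay 0 (fully
+    # wildcarded) until overridden with of_ip_mask_map_set().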
+    gen_unified_match_to_v1(out)
+    gen_unified_match_to_v2(out)
+    gen_unified_match_to_v3(out)
+    gen_v1_to_unified_match(out)
+    gen_v2_to_unified_match(out)
+    gen_v3_to_unified_match(out)
+    return
diff --git a/c_gen/c_show_gen.py b/c_gen/c_show_gen.py
new file mode 100644
index 0000000..5dab038
--- /dev/null
+++ b/c_gen/c_show_gen.py
@@ -0,0 +1,268 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+@brief Show function generation
+
+Generates show function files.
+
+"""
+
+import sys
+import of_g
+import loxi_front_end.match as match
+import loxi_front_end.flags as flags
+from generic_utils import *
+import loxi_front_end.type_maps as type_maps
+import loxi_utils.loxi_utils as loxi_utils
+import loxi_front_end.identifiers as identifiers
+from c_test_gen import var_name_map
+
+def gen_obj_show_h(out, name):
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/**
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * Header file for object showing. 
+ */
+
+/**
+ * Show object declarations
+ *
+ * Routines that emit a human-readable dump of each object.
+ *
+ */
+
+#if !defined(_LOCI_OBJ_SHOW_H_)
+#define _LOCI_OBJ_SHOW_H_
+
+#include <loci/loci.h>
+#include <stdio.h>
+
+/* g++ requires this to pick up PRI, etc.
+ * See  http://gcc.gnu.org/ml/gcc-help/2006-10/msg00223.html
+ */
+#if !defined(__STDC_FORMAT_MACROS)
+#define __STDC_FORMAT_MACROS
+#endif
+#include <inttypes.h>
+
+
+/**
+ * Show any OF object. 
+ */
+int of_object_show(loci_writer_f writer, void* cookie, of_object_t* obj); 
+
+
+
+
+
+
+""")
+
+    type_to_emitter = dict(
+
+        )
+    for version in of_g.of_version_range:
+        for cls in of_g.standard_class_order:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            out.write("""\
+int %(cls)s_%(ver_name)s_show(loci_writer_f writer, void* cookie, %(cls)s_t *obj);
+""" % dict(cls=cls, ver_name=loxi_utils.version_to_name(version)))
+
+    out.write("""
+#endif /* _LOCI_OBJ_SHOW_H_ */
+""")
+
+def gen_obj_show_c(out, name):
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/**
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * Source file for object showing. 
+ * 
+ */
+
+#define DISABLE_WARN_UNUSED_RESULT
+#include <loci/loci.h>
+#include <loci/loci_show.h>
+#include <loci/loci_obj_show.h>
+
+static int
+unknown_show(loci_writer_f writer, void* cookie, of_object_t *obj)
+{
+    return writer(cookie, "Unable to print object of type %d, version %d\\n", 
+                         obj->object_id, obj->version);
+}    
+""")
+
+    for version in of_g.of_version_range:
+        ver_name = loxi_utils.version_to_name(version)
+        for cls in of_g.standard_class_order:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            out.write("""
+int
+%(cls)s_%(ver_name)s_show(loci_writer_f writer, void* cookie, %(cls)s_t *obj)
+{
+    int out = 0;
+""" % dict(cls=cls, ver_name=ver_name))
+
+            members, member_types = loxi_utils.all_member_types_get(cls, version)
+            for m_type in member_types:
+                if loxi_utils.type_is_scalar(m_type) or m_type in \
+                        ["of_match_t", "of_octets_t"]:
+                    # Declare instance of these
+                    out.write("    %s %s;\n" % (m_type, var_name_map(m_type)))
+                else:
+                    out.write("""
+    %(m_type)s %(v_name)s;
+"""  % dict(m_type=m_type, v_name=var_name_map(m_type)))
+                    if loxi_utils.class_is_list(m_type):
+                        base_type = loxi_utils.list_to_entry_type(m_type)
+                        out.write("    %s elt;\n    int rv;\n" % base_type)
+            for member in members:
+                m_type = member["m_type"]
+                m_name = member["name"]
+                #emitter = "LOCI_SHOW_" + loxi_utils.type_to_short_name(m_type)
+                emitter = "LOCI_SHOW_" + loxi_utils.type_to_short_name(m_type) + "_" + m_name; 
+                if loxi_utils.skip_member_name(m_name):
+                    continue
+                if (loxi_utils.type_is_scalar(m_type) or
+                    m_type in ["of_match_t", "of_octets_t"]):
+                    out.write("""
+    %(cls)s_%(m_name)s_get(obj, &%(v_name)s);
+    out += writer(cookie, "%(m_name)s=");
+    out += %(emitter)s(writer, cookie, %(v_name)s);
+    out += writer(cookie, " "); 
+""" % dict(cls=cls, m_name=m_name, m_type=m_type,
+           v_name=var_name_map(m_type), emitter=emitter))
+                elif loxi_utils.class_is_list(m_type):
+                    sub_cls = m_type[:-2] # Trim _t
+                    elt_type = loxi_utils.list_to_entry_type(m_type)
+                    out.write("""
+    out += writer(cookie, "%(elt_type)s={ ");
+    %(cls)s_%(m_name)s_bind(obj, &%(v_name)s);
+    %(u_type)s_ITER(&%(v_name)s, &elt, rv) {
+        of_object_show(writer, cookie, (of_object_t *)&elt);
+    }
+    out += writer(cookie, "} "); 
+""" % dict(sub_cls=sub_cls, u_type=sub_cls.upper(), v_name=var_name_map(m_type),
+           elt_type=elt_type, cls=cls, m_name=m_name, m_type=m_type))
+                else:
+                    sub_cls = m_type[:-2] # Trim _t
+                    out.write("""
+    %(cls)s_%(m_name)s_bind(obj, &%(v_name)s);
+    out += %(sub_cls)s_%(ver_name)s_show(writer, cookie, &%(v_name)s);
+""" % dict(cls=cls, sub_cls=sub_cls, m_name=m_name, 
+           v_name=var_name_map(m_type), ver_name=ver_name))
+
+            out.write("""
+    return out;
+}
+""")
+    out.write("""
+/**
+ * Log a match entry
+ */
+int
+loci_show_match(loci_writer_f writer, void* cookie, of_match_t *match)
+{
+    int out = 0;
+""")
+
+    for key, entry in match.of_match_members.items():
+        m_type = entry["m_type"]
+        #emitter = "LOCI_SHOW_" + loxi_utils.type_to_short_name(m_type)
+        emitter = "LOCI_SHOW_" + loxi_utils.type_to_short_name(m_type) + "_" + key; 
+        out.write("""
+    if (OF_MATCH_MASK_%(ku)s_ACTIVE_TEST(match)) {
+        out += writer(cookie, "%(key)s active="); 
+        out += %(emitter)s(writer, cookie, match->fields.%(key)s);
+        out += writer(cookie, "/"); 
+        out += %(emitter)s(writer, cookie, match->masks.%(key)s);
+        out += writer(cookie, " ");
+    }
+""" % dict(key=key, ku=key.upper(), emitter=emitter, m_type=m_type))
+
+    out.write("""
+    return out;
+}
+""")
+
+    # Generate big table indexed by version and object
+    for version in of_g.of_version_range:
+        out.write("""
+static loci_obj_show_f show_funs_v%(version)s[OF_OBJECT_COUNT] = {
+""" % dict(version=version))
+        out.write("    unknown_show, /* of_object, not a valid specific type */\n")
+        for j, cls in enumerate(of_g.all_class_order):
+            comma = ""
+            if j < len(of_g.all_class_order) - 1: # Avoid ultimate comma
+                comma = ","
+
+            if (not loxi_utils.class_in_version(cls, version) or 
+                    cls in type_maps.inheritance_map):
+                out.write("    unknown_show%s\n" % comma);
+            else:
+                out.write("    %s_%s_show%s\n" % 
+                          (cls, loxi_utils.version_to_name(version), comma))
+        out.write("};\n\n")
+
+    out.write("""
+static loci_obj_show_f *show_funs[5] = {
+    NULL,
+    show_funs_v1,
+    show_funs_v2,
+    show_funs_v3,
+    show_funs_v4
+};
+
+int
+of_object_show(loci_writer_f writer, void* cookie, of_object_t *obj)
+{
+    if ((obj->object_id > 0) && (obj->object_id < OF_OBJECT_COUNT)) {
+        if (((obj)->version > 0) && ((obj)->version <= OF_VERSION_1_2)) {
+            /* @fixme VERSION */
+            return show_funs[obj->version][obj->object_id](writer, cookie, (of_object_t *)obj);
+        } else {
+            return writer(cookie, "Bad version %d\\n", obj->version);
+        }
+    }
+    return writer(cookie, "Bad object id %d\\n", obj->object_id);
+}
+""")
+
diff --git a/c_gen/c_test_gen.py b/c_gen/c_test_gen.py
new file mode 100644
index 0000000..97cc78d
--- /dev/null
+++ b/c_gen/c_test_gen.py
@@ -0,0 +1,1964 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+@brief Test case generation functions
+
+@fixme Update the following
+The following components are generated.
+
+test_common.[ch]:  A collection of common code for tests.  Currently
+this includes the ability to set the scalar members of an object with
+incrementing values and then similarly verify those values
+
+test_scalar_acc.c: Instantiate each type of object, then set and get
+scalar values in the objects.
+
+test_list.c: Instantiate each type of list, add an element of each
+type the list supports, setting scalar values of the elements.
+
+test_match.c: Various tests for match objects
+
+test_msg.c: Instantiate top level messages
+
+These will move towards unified tests that do the following:
+
+Create or init an object.
+Populate the object with incrementing values.
+Possibly transform the object in some way (e.g., run the underlying
+wire buffer through a parse routine).
+Verify that the members all have the appropriate value
+
+Throughout, checking the consistency of memory and memory operations
+is done with mcheck (not supported on Mac OS X).
+
+"""
+
+import sys
+import of_g
+import loxi_front_end.match as match
+import loxi_front_end.flags as flags
+from generic_utils import *
+import loxi_front_end.type_maps as type_maps
+import loxi_utils.loxi_utils as loxi_utils
+import loxi_front_end.identifiers as identifiers
+
+def var_name_map(m_type):
+    """
+    Map a type to a generic variable name for the type.
+    @param m_type The data type
+
+    Used mostly in test code generation, but also for the dup functions.
+    """
+    _var_name_map= dict(
+        uint8_t="val8",
+        uint16_t="val16",
+        uint32_t="val32",
+        uint64_t="val64",
+        of_port_no_t="port_no",
+        of_fm_cmd_t="fm_cmd",
+        of_wc_bmap_t="wc_bmap",
+        of_match_bmap_t = "match_bmap",
+        of_port_name_t="port_name", 
+        of_table_name_t="table_name",
+        of_desc_str_t="desc_str",
+        of_serial_num_t="ser_num", 
+        of_mac_addr_t="mac_addr", 
+        of_ipv6_t="ipv6",
+        # Non-scalars; more TBD
+        of_octets_t="octets",
+        of_meter_features_t="features",
+        of_match_t="match")
+
+    if m_type.find("of_list_") == 0:
+        return "list"
+    if m_type in of_g.of_mixed_types:
+        return of_g.of_mixed_types[m_type]["short_name"]
+    return _var_name_map[m_type]
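+
+# Illustrative mappings, derived from the table above:
+#   var_name_map("uint32_t")      -> "val32"
+#   var_name_map("of_mac_addr_t") -> "mac_addr"
+#   var_name_map("of_list_oxm_t") -> "list"    (any of_list_* type)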
+
+integer_types = ["uint8_t", "uint16_t", "uint32_t", "uint64_t",
+                 "of_port_no_t", "of_fm_cmd_t", "of_wc_bmap_t",
+                 "of_match_bmap_t"]
+string_types = [ "of_port_name_t", "of_table_name_t",
+                "of_desc_str_t", "of_serial_num_t", "of_mac_addr_t", 
+                "of_ipv6_t"]
+
+scalar_types = integer_types[:]
+scalar_types.extend(string_types)
+
+def ignore_member(cls, version, m_name, m_type):
+    """
+    Filter out names or types that either don't have accessors
+    or those that should not be messed with
+    or whose types we're not ready to deal with yet.
+    """
+    # This will probably need more granularity as more extensions are added
+    if (type_maps.class_is_extension(cls, version) and (
+            m_name == "experimenter" or
+            m_name == "subtype")):
+        return True
+    return loxi_utils.skip_member_name(m_name) or m_type not in scalar_types
+
+def gen_fill_string(out):
+    out.write("""
+
+/**
+ * The increment to use on values inside a string
+ */
+#define OF_TEST_STR_INCR 3
+
+/**
+ * Fill in a buffer with incrementing values starting
+ * with the given value
+ * @param buf The buffer to fill
+ * @param value The value to use for data
+ * @param len The number of bytes to fill
+ */
+
+void
+of_test_str_fill(uint8_t *buf, int value, int len)
+{
+    int i;
+
+    for (i = 0; i < len; i++) {
+        *buf = value;
+        value += OF_TEST_STR_INCR;
+        buf++;
+    }
+}
+
+/**
+ * Given a buffer, verify that it's filled as above
+ * @param buf The buffer to check
+ * @param value The expected starting value for the data
+ * @param len The number of bytes to check
+ * @return Boolean True on equality (success)
+ */
+
+int
+of_test_str_check(uint8_t *buf, int value, int len)
+{
+    int i;
+    uint8_t val8;
+
+    val8 = value;
+
+    for (i = 0; i < len; i++) {
+        if (*buf != val8) {
+            return 0;
+        }
+        val8 += OF_TEST_STR_INCR;
+        buf++;
+    }
+
+    return 1;
+}
+
+/**
+ * Global that determines how octets should be populated
+ * -1 means use value % MAX (below) to determine length
+ * 0, 1, ... means use that fixed length
+ *
+ * Note: Was 16K, but that made objects too big.  May add flexibility
+ * to call populate with a max parameter for length
+ */
+int octets_pop_style = -1;
+#define OCTETS_MAX_VALUE (128) /* 16K was too big */
+#define OCTETS_MULTIPLIER 6367 /* A prime */
+
+int
+of_octets_populate(of_octets_t *octets, int value)
+{
+    if (octets_pop_style < 0) {
+        octets->bytes = (value * OCTETS_MULTIPLIER) % OCTETS_MAX_VALUE;
+    } else {
+        octets->bytes = octets_pop_style;
+    }
+
+    if (octets->bytes != 0) {
+        if ((octets->data = (uint8_t *)MALLOC(octets->bytes)) == NULL) {
+            return 0;
+        }
+        of_test_str_fill(octets->data, value, octets->bytes);
+        value += 1;
+    }
+
+    return value;
+}
+
+int
+of_octets_check(of_octets_t *octets, int value)
+{
+    int len;
+
+    if (octets_pop_style < 0) {
+        len =  (value * OCTETS_MULTIPLIER) % OCTETS_MAX_VALUE;
+        TEST_ASSERT(octets->bytes == len);
+    } else {
+        TEST_ASSERT(octets->bytes == octets_pop_style);
+    }
+
+    if (octets->bytes != 0) {
+        TEST_ASSERT(of_test_str_check(octets->data, value, octets->bytes)
+            == 1);
+        value += 1;
+    }
+
+    return value;
+}
+
+int
+of_match_populate(of_match_t *match, of_version_t version, int value)
+{
+    MEMSET(match, 0, sizeof(*match));
+    match->version = version;
+""")
+
+    for key, entry in match.of_match_members.items():
+        out.write("""
+    if (!(of_match_incompat[version] & 
+            OF_OXM_BIT(OF_OXM_INDEX_%(ku)s))) {
+        OF_MATCH_MASK_%(ku)s_EXACT_SET(match);
+        VAR_%(u_type)s_INIT(match->fields.%(key)s, value);
+        value += 1;
+    }
+
+""" % dict(key=key, u_type=entry["m_type"].upper(), ku=key.upper()))
+
+    out.write("""
+    if (value % 2) {
+        /* Sometimes set ipv4 addr masks to non-exact */
+        match->masks.ipv4_src = 0xffff0000;
+        match->masks.ipv4_dst = 0xfffff800;
+    }
+    return value;
+}
+
+int
+of_match_check(of_match_t *match, of_version_t version, int value)
+{
+    of_match_t check;
+
+    value = of_match_populate(&check, match->version, value);
+    TEST_ASSERT(value != 0);
+    TEST_ASSERT(MEMCMP(match, &check, sizeof(check)) == 0);
+
+    return value;
+}
+""")
+
+def gen_common_test_header(out, name):
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/*
+ * Test header file
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ */
+
+#if !defined(_TEST_COMMON_H_)
+#define _TEST_COMMON_H_
+
+#define DISABLE_WARN_UNUSED_RESULT
+#include <loci/loci.h>
+#include <locitest/of_dup.h>
+#include <locitest/unittest.h>
+
+extern int global_error;
+extern int exit_on_error;
+
+/* @todo Make option for -k to continue tests if errors */
+#define RUN_TEST(test) do {                                             \\
+        int rv;                                                         \\
+        TESTCASE(test, rv);                                             \\
+        if (rv != TEST_PASS) {                                          \\
+            global_error=1;                                             \\
+            if (exit_on_error) return(1);                               \\
+        }                                                               \\
+    } while(0)
+
+#define TEST_OK(op) TEST_ASSERT((op) == OF_ERROR_NONE)
+#define TEST_INDIGO_OK(op) TEST_ASSERT((op) == INDIGO_ERROR_NONE)
+
+/*
+ * Declarations of functions to populate scalar values in a class
+ */
+
+extern void of_test_str_fill(uint8_t *buf, int value, int len);
+extern int of_test_str_check(uint8_t *buf, int value, int len);
+
+
+extern int of_octets_populate(of_octets_t *octets, int value);
+extern int of_octets_check(of_octets_t *octets, int value);
+extern int of_match_populate(of_match_t *match, of_version_t version,
+                             int value);
+extern int of_match_check(of_match_t *match, of_version_t version, int value);
+extern int test_ident_macros(void);
+extern int test_dump_objs(void);
+
+/* In test_match_utils.c */
+extern int test_match_utils(void);
+
+extern int run_unified_accessor_tests(void);
+extern int run_match_tests(void);
+extern int run_utility_tests(void);
+
+extern int run_scalar_acc_tests(void);
+extern int run_list_tests(void);
+extern int run_message_tests(void);
+extern int run_setup_from_add_tests(void);
+
+extern int run_validator_tests(void);
+
+extern int run_list_limits_tests(void);
+
+extern int test_ext_objs(void);
+
+""")
+
+    for version in of_g.of_version_range:
+        for cls in of_g.standard_class_order:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            out.write("""
+extern int %(cls)s_%(v_name)s_populate(
+    %(cls)s_t *obj, int value);
+extern int %(cls)s_%(v_name)s_check(
+    %(cls)s_t *obj, int value);
+extern int %(cls)s_%(v_name)s_populate_scalars(
+    %(cls)s_t *obj, int value);
+extern int %(cls)s_%(v_name)s_check_scalars(
+    %(cls)s_t *obj, int value);
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+
+    out.write("""
+/*
+ * Declarations for list population and check primitives
+ */
+""")
+ 
+    for version in of_g.of_version_range:
+        for cls in of_g.ordered_list_objects:
+            if cls in type_maps.inheritance_map:
+                continue
+
+            if version in of_g.unified[cls]:
+               out.write("""
+extern int
+    list_setup_%(cls)s_%(v_name)s(
+    %(cls)s_t *list, int value);
+extern int
+    list_check_%(cls)s_%(v_name)s(
+    %(cls)s_t *list, int value);
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+
+    out.write("\n#endif /* _TEST_COMMON_H_ */\n")
+
+def gen_common_test(out, name):
+    """
+    Generate common test content including main
+    """
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/*
+ * Common test code for LOCI
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ */
+
+#define DISABLE_WARN_UNUSED_RESULT
+#include "loci_log.h"
+#include <loci/loci_obj_dump.h>
+#include <locitest/unittest.h>
+#include <locitest/test_common.h>
+
+#if !defined(__APPLE__)
+#include <mcheck.h>
+#define MCHECK_INIT mcheck(NULL)
+#else /* mcheck not available under OS X */
+#define MCHECK_INIT do { } while (0)
+#endif
+
+/**
+ * Exit on error if set to 1
+ */
+int exit_on_error = 1;
+
+/**
+ * Global error state: 0 is okay, 1 is error 
+ */
+int global_error = 0;
+
+extern int run_unified_accessor_tests(void);
+extern int run_match_tests(void);
+extern int run_utility_tests(void);
+
+extern int run_scalar_acc_tests(void);
+extern int run_list_tests(void);
+extern int run_message_tests(void);
+
+/**
+ * Macros for initializing and checking scalar types
+ *
+ * @param var The variable being initialized or checked
+ * @param val The integer value to set/check against, see below
+ *
+ * Note that equality means something special for strings:  each byte
+ * is initialized to an incrementing value, so the check verifies that pattern.
+ *
+ */
+
+""")
+    for t in scalar_types:
+        if t in integer_types:
+            out.write("""
+#define VAR_%s_INIT(var, val) var = (%s)(val)
+#define VAR_%s_CHECK(var, val) ((var) == (%s)(val))
+""" % (t.upper(), t, t.upper(), t))
+        else:
+            out.write("""
+#define VAR_%s_INIT(var, val) \\
+    of_test_str_fill((uint8_t *)&(var), val, sizeof(var))
+#define VAR_%s_CHECK(var, val) \\
+    of_test_str_check((uint8_t *)&(var), val, sizeof(var))
+""" % (t.upper(), t.upper()))
+
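+    # For example, by direct substitution of the templates above, uint16_t and
+    # of_mac_addr_t yield roughly:
+    #
+    #     #define VAR_UINT16_T_INIT(var, val) var = (uint16_t)(val)
+    #     #define VAR_UINT16_T_CHECK(var, val) ((var) == (uint16_t)(val))
+    #
+    #     #define VAR_OF_MAC_ADDR_T_INIT(var, val) \
+    #         of_test_str_fill((uint8_t *)&(var), val, sizeof(var))
+    #     #define VAR_OF_MAC_ADDR_T_CHECK(var, val) \
+    #         of_test_str_check((uint8_t *)&(var), val, sizeof(var))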
+    gen_fill_string(out)
+    gen_scalar_set_check_funs(out)
+    gen_list_set_check_funs(out)
+    gen_unified_accessor_funs(out)
+
+    gen_ident_tests(out)
+    gen_log_test(out)
+
+def gen_message_scalar_test(out, name):
+    """
+    Generate test cases for message objects, scalar accessors
+    """
+
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/**
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * Message-scalar tests for all versions
+ */
+
+#include <locitest/test_common.h>
+""")
+    for version in of_g.of_version_range:
+        v_name = loxi_utils.version_to_name(version)
+        out.write("""
+/**
+ * Message-scalar tests for version %s
+ */
+""" % v_name)
+        for cls in of_g.standard_class_order:
+            if cls in type_maps.inheritance_map:
+                continue
+            if version in of_g.unified[cls]:
+                message_scalar_test(out, version, cls)
+
+    out.write("""
+int
+run_scalar_acc_tests(void)
+{
+""")
+    for version in of_g.of_version_range:
+        v_name = loxi_utils.version_to_name(version)
+        for cls in of_g.standard_class_order:
+            if cls in type_maps.inheritance_map:
+                continue
+            if version in of_g.unified[cls]:
+                test_name = "%s_%s" % (cls, v_name)
+                out.write("    RUN_TEST(%s_scalar);\n" % test_name)
+
+    out.write("    return TEST_PASS;\n}\n");
+    
+def message_scalar_test(out, version, cls):
+    """
+    Generate one test case for the given version and class
+    """
+
+    members, member_types = scalar_member_types_get(cls, version)
+    length = of_g.base_length[(cls, version)]
+    v_name = loxi_utils.version_to_name(version)
+
+    out.write("""
+static int
+test_%(cls)s_%(v_name)s_scalar(void)
+{
+    %(cls)s_t *obj;
+
+    obj = %(cls)s_new(%(v_name)s);
+    TEST_ASSERT(obj != NULL);
+    TEST_ASSERT(obj->version == %(v_name)s);
+    TEST_ASSERT(obj->length == %(length)d);
+    TEST_ASSERT(obj->parent == NULL);
+    TEST_ASSERT(obj->object_id == %(u_cls)s);
+""" % dict(cls=cls, u_cls=cls.upper(), 
+           v_name=v_name, length=length, version=version))
+    if not type_maps.class_is_virtual(cls):
+        out.write("""
+    if (obj->wire_length_get != NULL) {
+        int length;
+
+        obj->wire_length_get((of_object_t *)obj, &length);
+        TEST_ASSERT(length == %(length)d);
+    }
+
+    /* Set up incrementing values for scalar members */
+    %(cls)s_%(v_name)s_populate_scalars(obj, 1);
+
+    /* Check values just set */
+    TEST_ASSERT(%(cls)s_%(v_name)s_check_scalars(obj, 1) != 0);
+""" % dict(cls=cls, u_cls=cls.upper(), 
+           v_name=v_name, length=length, version=version))
+
+    out.write("""
+    %(cls)s_delete(obj);
+
+    /* To do: Check memory */
+    return TEST_PASS;
+}
+""" % dict(cls=cls))
+
+# Get the members and list of scalar types for members of a given class
+def scalar_member_types_get(cls, version):
+    member_types = []
+
+    if not version in of_g.unified[cls]:
+        return ([], [])
+
+    if "use_version" in of_g.unified[cls][version]:
+        v = of_g.unified[cls][version]["use_version"]
+        members = of_g.unified[cls][v]["members"]
+    else:
+        members = of_g.unified[cls][version]["members"]
+    # Accumulate variables that are supported
+    for member in members:
+        m_type = member["m_type"]
+        m_name = member["name"]
+        if (not loxi_utils.type_is_scalar(m_type) or 
+            ignore_member(cls, version, m_name, m_type)):
+            continue
+        if not m_type in member_types:
+            member_types.append(m_type)
+
+    return (members, member_types)
+
+def scalar_funs_instance(out, cls, version, members, member_types):
+    """
+    Generate one instance of scalar set/check functions
+    """
+    out.write("""
+/**
+ * Populate the scalar values in obj of type %(cls)s, 
+ * version %(v_name)s 
+ * @param obj Pointer to an object to populate
+ * @param value The seed value to use in populating the object
+ * @returns The value after increments for this object's values
+ */
+int %(cls)s_%(v_name)s_populate_scalars(
+    %(cls)s_t *obj, int value) {
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+    # Declare string types
+    for t in member_types:
+        out.write("    %s %s;\n" % (t, var_name_map(t)))
+    for member in members:
+        m_type = member["m_type"]
+        m_name = member["name"]
+        if (not loxi_utils.type_is_scalar(m_type) or
+            ignore_member(cls, version, m_name, m_type)):
+            continue
+        v_name = var_name_map(m_type);
+        out.write("""
+    VAR_%(u_type)s_INIT(%(v_name)s, value);
+    %(cls)s_%(m_name)s_set(obj, %(v_name)s);
+    value += 1;
+""" % dict(cls=cls, m_name=m_name, u_type=m_type.upper(), v_name=v_name))
+    out.write("""
+    return value;
+}
+""")
+    
+    out.write("""
+/**
+ * Check scalar values in obj of type %(cls)s, 
+ * version %(v_name)s 
+ * @param obj Pointer to an object to check
+ * @param value Starting value for checking
+ * @returns The value after increments for this object's values
+ */
+int %(cls)s_%(v_name)s_check_scalars(
+    %(cls)s_t *obj, int value) {
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+
+    for t in member_types:
+        out.write("    %s %s;\n" % (t, var_name_map(t)))
+    for member in members:
+        m_type = member["m_type"]
+        m_name = member["name"]
+        if (not loxi_utils.type_is_scalar(m_type) or
+            ignore_member(cls, version, m_name, m_type)):
+            continue
+        v_name = var_name_map(m_type);
+        out.write("""
+    %(cls)s_%(m_name)s_get(obj, &%(v_name)s);
+    TEST_ASSERT(VAR_%(u_type)s_CHECK(%(v_name)s, value));
+    value += 1;
+""" % dict(cls=cls, m_name=m_name, u_type=m_type.upper(), v_name=v_name))
+
+    out.write("""
+    return value;
+}
+
+""")
+
+def gen_scalar_set_check_funs(out):
+    """
+    For each object class with scalar members, generate functions that 
+    set and check their values
+    """
+    for version in of_g.of_version_range:
+        for cls in of_g.standard_class_order:
+            (members, member_types) = scalar_member_types_get(cls, version)
+            scalar_funs_instance(out, cls, version, members, member_types)
+
+
+# Helper function to set up a subclass instance for a test
+def setup_instance(out, cls, subcls, instance, v_name, inst_len, version):
+    base_type = loxi_utils.list_to_entry_type(cls)
+    setup_template = """
+    %(subcls)s_init(%(inst)s, %(v_name)s, -1, 1);
+    %(cls)s_append_bind(list, 
+            (%(base_type)s_t *)%(inst)s);
+    value = %(subcls)s_%(v_name)s_populate(
+        %(inst)s, value);
+    cur_len += %(inst)s->length;
+    TEST_ASSERT(list->length == cur_len);
+"""
+    out.write("""
+    /* Append two instances of type %s */
+""" % subcls)
+    for i in range(2):
+        out.write(setup_template %
+                  dict(inst=instance, subcls=subcls, v_name=v_name, 
+                       base_type=base_type, cls=cls, inst_len=inst_len, 
+                       version=version))
+
+def check_instance(out, cls, subcls, instance, v_name, inst_len, version, last):
+    check_template = ""
+    if inst_len >= 0:
+        check_template = """
+    TEST_ASSERT(%(inst)s->length == %(inst_len)d);
+    if (%(inst)s->wire_length_get != NULL) {
+        int length;
+
+        %(inst)s->wire_length_get(
+            (of_object_t *)&elt, &length);
+        TEST_ASSERT(length == %(inst_len)d);
+    }
+"""
+    check_template += """
+    TEST_ASSERT(%(inst)s->object_id == %(elt_name)s);
+    value = %(subcls)s_%(v_name)s_check(
+        %(inst)s, value);
+    TEST_ASSERT(value != 0);
+"""
+    out.write("\n    /* Check two instances of type %s */" % instance)
+
+    out.write(check_template % 
+              dict(elt_name=loxi_utils.enum_name(subcls), inst_len=inst_len,
+                   inst=instance, subcls=subcls,
+                   v_name=loxi_utils.version_to_name(version)))
+    out.write("""\
+    TEST_OK(%(cls)s_next(list, &elt));
+""" % dict(cls=cls))
+
+    out.write(check_template % 
+              dict(elt_name=loxi_utils.enum_name(subcls), inst_len=inst_len,
+                   inst=instance, subcls=subcls,
+                   v_name=loxi_utils.version_to_name(version)))
+    if last:
+        out.write("""\
+    TEST_ASSERT(%(cls)s_next(list, &elt) == OF_ERROR_RANGE);
+""" % dict(cls=cls))
+    else:
+        out.write("""\
+    TEST_OK(%(cls)s_next(list, &elt));
+""" % dict(cls=cls))
+
+def setup_list_fn(out, version, cls):
+    """
+    Generate a helper function that populates a list with two
+    of each type of subclass it supports
+    """
+    out.write("""
+/**
+ * Set up a list of type %(cls)s with two of each type of subclass
+ */
+int
+list_setup_%(cls)s_%(v_name)s(
+    %(cls)s_t *list, int value)
+{
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+    base_type = loxi_utils.list_to_entry_type(cls)
+    out.write("""
+    %(base_type)s_t elt;
+    int cur_len = 0;
+""" % dict(cls=cls, base_type=base_type))
+    
+    sub_classes =  type_maps.sub_class_map(base_type, version)
+    v_name = loxi_utils.version_to_name(version)
+
+    if len(sub_classes) == 0:
+        out.write("    /* No subclasses for %s */\n"% base_type)
+        out.write("    %s_t *elt_p;\n" % base_type)
+        out.write("\n    elt_p = &elt;\n")
+    else:
+        out.write("    /* Declare pointers for each subclass */\n")
+        for instance, subcls in sub_classes:
+            out.write("    %s_t *%s;\n" % (subcls, instance))
+        out.write("\n    /* Instantiate pointers for each subclass */\n")
+        for instance, subcls in sub_classes:
+            out.write("    %s = &elt.%s;\n" % (instance, instance))
+
+    if len(sub_classes) == 0: # No inheritance case
+        inst_len = loxi_utils.base_type_to_length(base_type, version)
+        setup_instance(out, cls, base_type, "elt_p", v_name, inst_len, version)
+    else:
+        for instance, subcls in sub_classes:
+            inst_len = of_g.base_length[(subcls, version)]
+            setup_instance(out, cls, subcls, instance, v_name, inst_len, version)
+    out.write("""
+
+    return value;
+}
+""")
+
+def check_list_fn(out, version, cls):
+    """
+    Generate a helper function that checks a list populated by above fn
+    """
+    out.write("""
+/**
+ * Check a list of type %(cls)s generated by 
+ * list_setup_%(cls)s_%(v_name)s
+ */
+int
+list_check_%(cls)s_%(v_name)s(
+    %(cls)s_t *list, int value)
+{
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+    base_type = loxi_utils.list_to_entry_type(cls)
+    out.write("""
+    %(base_type)s_t elt;
+""" % dict(cls=cls, base_type=base_type))
+    
+    sub_classes =  type_maps.sub_class_map(base_type, version)
+    v_name = loxi_utils.version_to_name(version)
+
+    if len(sub_classes) == 0:
+        out.write("    /* No subclasses for %s */\n"% base_type)
+        out.write("    %s_t *elt_p;\n" % base_type)
+        out.write("\n    elt_p = &elt;\n")
+    else:
+        out.write("    /* Declare pointers for each subclass */\n")
+        for instance, subcls in sub_classes:
+            out.write("    %s_t *%s;\n" % (subcls, instance))
+        out.write("\n    /* Instantiate pointers for each subclass */\n")
+        for instance, subcls in sub_classes:
+            out.write("    %s = &elt.%s;\n" % (instance, instance))
+
+    out.write("    TEST_OK(%(cls)s_first(list, &elt));\n" % dict(cls=cls))
+    if len(sub_classes) == 0: # No inheritance case
+        if loxi_utils.class_is_var_len(base_type, version):
+            inst_len = -1
+        else:
+            inst_len = loxi_utils.base_type_to_length(base_type, version)
+        check_instance(out, cls, base_type, "elt_p", v_name, inst_len, 
+                       version, True)
+    else:
+        count = 0
+        for instance, subcls in sub_classes:
+            count += 1
+            if loxi_utils.class_is_var_len(subcls, version):
+                inst_len = -1
+            else:
+                inst_len = of_g.base_length[(subcls, version)]
+            check_instance(out, cls, subcls, instance, v_name, inst_len, 
+                           version, count==len(sub_classes))
+
+    out.write("""
+    return value;
+}
+""" % dict(base_type=base_type))
+
+def gen_list_set_check_funs(out):
+    for version in of_g.of_version_range:
+        for cls in of_g.ordered_list_objects:
+            if cls in type_maps.inheritance_map:
+                continue
+
+            if version in of_g.unified[cls]:
+                setup_list_fn(out, version, cls)
+                check_list_fn(out, version, cls)
+
+# Maybe: Get a map from list class to parent, mem_name of container
+
+def list_test(out, version, cls):
+    out.write("""
+static int
+test_%(cls)s_%(v_name)s(void)
+{
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+    base_type = loxi_utils.list_to_entry_type(cls)
+
+    out.write("""    %(cls)s_t *list;
+    int value = 1;
+""" % dict(cls=cls, base_type=base_type))
+
+    out.write("""
+    list = %(cls)s_new(%(v_name)s);
+    TEST_ASSERT(list != NULL);
+    TEST_ASSERT(list->version == %(v_name)s);
+    TEST_ASSERT(list->length == 0);
+    TEST_ASSERT(list->parent == NULL);
+    TEST_ASSERT(list->object_id == %(enum_cls)s);
+
+    value = list_setup_%(cls)s_%(v_name)s(list, value);
+    TEST_ASSERT(value != 0);
+""" % dict(cls=cls, base_type=base_type, v_name=loxi_utils.version_to_name(version), 
+           enum_cls=loxi_utils.enum_name(cls)))
+
+    out.write("""
+    /* Now check values */
+    value = 1;
+    value = list_check_%(cls)s_%(v_name)s(list, value);
+    TEST_ASSERT(value != 0);
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+
+    out.write("""
+    %(cls)s_delete(list);
+
+    return TEST_PASS;
+}
+""" % dict(cls=cls))
+
+def gen_list_test(out, name):
+    """
+    Generate base line test cases for lists
+    @param out The file handle to write to
+    """
+
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/**
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * Baseline list tests for all versions
+ */
+
+#include <locitest/test_common.h>
+""")
+    
+    for version in of_g.of_version_range:
+        v_name = loxi_utils.version_to_name(version)
+        out.write("""
+/**
+ * Baseline list tests for version %s
+ */
+""" % v_name)
+        for cls in of_g.ordered_list_objects:
+            if cls in type_maps.inheritance_map:
+                continue
+            if version in of_g.unified[cls]:
+                list_test(out, version, cls)
+
+    out.write("""
+int
+run_list_tests(void)
+{
+""")
+    for version in of_g.of_version_range:
+        v_name = loxi_utils.version_to_name(version)
+        for cls in of_g.ordered_list_objects:
+            if cls in type_maps.inheritance_map:
+                continue
+            if version in of_g.unified[cls]:
+                test_name = "%s_%s" % (cls, v_name)
+                out.write("    RUN_TEST(%s);\n" % test_name)
+
+    out.write("\n    return TEST_PASS;\n}\n");
+
+def gen_match_test(out, name):
+    """
+    Generate baseline tests for match functions
+    """
+
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""\
+/**
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * Match tests for all versions
+ * @fixme These are mostly hard coded now.
+ */
+
+#include <locitest/test_common.h>
+
+static int
+test_match_1(void)
+{
+    of_match_v1_t *m_v1;
+    of_match_v2_t *m_v2;
+    of_match_v3_t *m_v3;
+    of_match_v4_t *m_v4;
+    of_match_t match;
+    int value = 1;
+    int idx;
+    uint32_t exp_value;
+
+    /* Verify default values for ip mask map */
+    for (idx = 0; idx < OF_IP_MASK_MAP_COUNT; idx++) {
+        exp_value = (idx < 32) ? ~((1U << idx) - 1) : 0;
+        TEST_ASSERT(of_ip_index_to_mask(idx) == exp_value);
+        if (idx < 32) {
+            TEST_ASSERT(of_ip_mask_to_index(exp_value) == idx);
+        }
+    }
+
+    TEST_ASSERT(of_ip_mask_map_set(17, 0xabcdef00) == OF_ERROR_NONE);
+    TEST_ASSERT(of_ip_mask_to_index(0xabcdef00) == 17);
+    TEST_ASSERT(of_ip_index_to_mask(17) == 0xabcdef00);
+
+    TEST_ASSERT(of_ip_mask_map_set(62, 0xabcdefff) == OF_ERROR_NONE);
+    TEST_ASSERT(of_ip_mask_to_index(0xabcdefff) == 62);
+    TEST_ASSERT(of_ip_index_to_mask(62) == 0xabcdefff);
+
+    /* Test re-init */
+    of_ip_mask_map_init();
+    for (idx = 0; idx < OF_IP_MASK_MAP_COUNT; idx++) {
+        exp_value = (idx < 32) ? ~((1U << idx) - 1) : 0;
+        TEST_ASSERT(of_ip_index_to_mask(idx) == exp_value);
+        if (idx < 32) {
+            TEST_ASSERT(of_ip_mask_to_index(exp_value) == idx);
+        }
+    }
+""")
+
+    for version in of_g.of_version_range:
+        out.write("""
+    /* Create/populate/convert and delete for version %(v_name)s */
+    m_v%(version)d = of_match_v%(version)d_new(%(v_name)s);
+    TEST_ASSERT(m_v%(version)d != NULL);
+    TEST_ASSERT((value = of_match_populate(&match, %(v_name)s, value)) > 0);
+    TEST_OK(of_match_to_wire_match_v%(version)d(&match, m_v%(version)d));
+    of_match_v%(version)d_delete(m_v%(version)d);
+""" % dict(v_name=loxi_utils.version_to_name(version), version=version))
+
+    out.write("""
+    return TEST_PASS;
+}
+""")
+
+    out.write("""
+static int
+test_match_2(void)
+{
+    of_match_v1_t *m_v1;
+    of_match_v2_t *m_v2;
+    of_match_v3_t *m_v3;
+    of_match_v4_t *m_v4;
+    of_match_t match1;
+    of_match_t match2;
+    int value = 1;
+""")
+
+    for version in of_g.of_version_range:
+        out.write("""
+    TEST_ASSERT((value = of_match_populate(&match1, %(v_name)s, value)) > 0);
+    m_v%(version)d = of_match_v%(version)d_new(%(v_name)s);
+    TEST_ASSERT(m_v%(version)d != NULL);
+    TEST_OK(of_match_to_wire_match_v%(version)d(&match1, m_v%(version)d));
+    TEST_OK(of_match_v%(version)d_to_match(m_v%(version)d, &match2));
+    TEST_ASSERT(memcmp(&match1, &match2, sizeof(match1)) == 0);
+    of_match_v%(version)d_delete(m_v%(version)d);
+""" % dict(v_name=loxi_utils.version_to_name(version), version=version))
+
+    out.write("""
+    return TEST_PASS;
+}
+""")
+
+    out.write("""
+static int
+test_match_3(void)
+{
+    of_match_t match1;
+    of_match_t match2;
+    int value = 1;
+    of_octets_t octets;
+""")
+    for version in of_g.of_version_range:
+        out.write("""
+    /* Serialize to version %(v_name)s */
+    TEST_ASSERT((value = of_match_populate(&match1, %(v_name)s, value)) > 0);
+    TEST_ASSERT(of_match_serialize(%(v_name)s, &match1, &octets) == 
+        OF_ERROR_NONE);
+    TEST_ASSERT(of_match_deserialize(%(v_name)s, &match2, &octets) == 
+        OF_ERROR_NONE);
+    TEST_ASSERT(memcmp(&match1, &match2, sizeof(match1)) == 0);
+    FREE(octets.data);
+""" % dict(v_name=loxi_utils.version_to_name(version), version=version))
+
+    out.write("""
+    return TEST_PASS;
+}
+""")
+
+    out.write("""
+int run_match_tests(void)
+{
+    RUN_TEST(match_1);
+    RUN_TEST(match_2);
+    RUN_TEST(match_3);
+    RUN_TEST(match_utils);
+
+    return TEST_PASS;
+}
+""")
+
+def gen_msg_test(out, name):
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/**
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * Message tests for all versions
+ */
+
+#include <locitest/test_common.h>
+""")
+    for version in of_g.of_version_range:
+        for cls in of_g.ordered_messages:
+            if not (cls, version) in of_g.base_length:
+                continue
+            bytes = of_g.base_length[(cls, version)]
+            out.write("""
+static int
+test_%(cls)s_create_%(v_name)s(void)
+{
+    %(cls)s_t *obj;
+    uint8_t *msg_buf;
+    int value;
+    int len;
+
+    obj = %(cls)s_new(%(v_name)s);
+    TEST_ASSERT(obj != NULL);
+    TEST_ASSERT(obj->version == %(v_name)s);
+    TEST_ASSERT(obj->length == %(bytes)d);
+    TEST_ASSERT(obj->parent == NULL);
+    TEST_ASSERT(obj->object_id == %(enum)s);
+
+    /* Set up incrementing values for scalar members */
+    value = %(cls)s_%(v_name)s_populate_scalars(obj, 1);
+    TEST_ASSERT(value != 0);
+
+    /* Grab the underlying buffer from the message */
+    len = obj->length;
+    of_object_wire_buffer_steal((of_object_t *)obj, &msg_buf);
+    TEST_ASSERT(msg_buf != NULL);
+    %(cls)s_delete(obj);
+    /* TODO:  */
+    TEST_ASSERT(of_message_to_object_id(msg_buf, len) == %(enum)s);
+    obj = %(cls)s_new_from_message(OF_BUFFER_TO_MESSAGE(msg_buf));
+
+    TEST_ASSERT(obj != NULL);
+
+    /* @fixme Set up all message objects (recursively?) */
+
+    value = %(cls)s_%(v_name)s_check_scalars(obj, 1);
+    TEST_ASSERT(value != 0);
+
+    %(cls)s_delete(obj);
+
+    return TEST_PASS;
+}
+""" % dict(cls=cls, version=version, enum=loxi_utils.enum_name(cls),
+           v_name=loxi_utils.version_to_name(version), bytes=bytes))
+
+    out.write("""
+int
+run_message_tests(void)
+{
+""")
+    for version in of_g.of_version_range:
+        for cls in of_g.ordered_messages:
+            if not (cls, version) in of_g.base_length:
+                continue
+            test_name = "%s_create_%s" % (cls, loxi_utils.version_to_name(version))
+            out.write("    RUN_TEST(%s);\n" % test_name)
+
+    out.write("\n    return TEST_PASS;\n}\n");
+        
+
+def gen_list_setup_check(out, cls, version):
+    """
+    Generate functions that populate and check a list with two
+    of each type of subclass it supports
+    """
+    out.write("""
+/**
+ * Populate a list of type %(cls)s with two of each type of subclass
+ * @param list Pointer to the list to be populated
+ * @param value The seed value to use in populating the list
+ * @returns The value after increments for this object's values
+ */
+int
+%(cls)s_%(v_name)s_populate(
+    %(cls)s_t *list, int value)
+{
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+    base_type = loxi_utils.list_to_entry_type(cls)
+    out.write("""
+    %(base_type)s_t elt;
+    int cur_len = 0;
+""" % dict(cls=cls, base_type=base_type))
+    
+    sub_classes =  type_maps.sub_class_map(base_type, version)
+    v_name = loxi_utils.version_to_name(version)
+
+    if len(sub_classes) == 0:
+        out.write("    /* No subclasses for %s */\n"% base_type)
+        out.write("    %s_t *elt_p;\n" % base_type)
+        out.write("\n    elt_p = &elt;\n")
+    else:
+        out.write("    /* Declare pointers for each subclass */\n")
+        for instance, subcls in sub_classes:
+            out.write("    %s_t *%s;\n" % (subcls, instance))
+        out.write("\n    /* Instantiate pointers for each subclass */\n")
+        for instance, subcls in sub_classes:
+            out.write("    %s = &elt.%s;\n" % (instance, instance))
+
+#     if type_maps.class_is_virtual(base_type):
+#         out.write("""\
+#     TEST_OK(%(base_type)s_header_init(
+#         (%(base_type)s_header_t *)&elt, %(v_name)s, -1, 1));
+# """ % dict(base_type=base_type, v_name=loxi_utils.version_to_name(version)))
+#     else:
+#         out.write("""\
+#     TEST_OK(%(base_type)s_init(&elt, %(v_name)s, -1, 1));
+# """ % dict(base_type=base_type, v_name=loxi_utils.version_to_name(version)))
+
+    if len(sub_classes) == 0: # No inheritance case
+        inst_len = loxi_utils.base_type_to_length(base_type, version)
+        setup_instance(out, cls, base_type, "elt_p", v_name, inst_len, version)
+    else:
+        for instance, subcls in sub_classes:
+            inst_len = of_g.base_length[(subcls, version)]
+            setup_instance(out, cls, subcls, instance, v_name, 
+                           inst_len, version)
+    out.write("""
+    return value;
+}
+""")
+    out.write("""
+/**
+ * Check a list of type %(cls)s generated by 
+ * %(cls)s_%(v_name)s_populate
+ * @param list Pointer to the list that was populated
+ * @param value Starting value for checking
+ * @returns The value after increments for this object's values
+ */
+int
+%(cls)s_%(v_name)s_check(
+    %(cls)s_t *list, int value)
+{
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+    base_type = loxi_utils.list_to_entry_type(cls)
+    out.write("""
+    %(base_type)s_t elt;
+    int count = 0;
+    int rv;
+""" % dict(cls=cls, base_type=base_type))
+    
+
+    sub_classes =  type_maps.sub_class_map(base_type, version)
+    v_name = loxi_utils.version_to_name(version)
+
+    if len(sub_classes) == 0:
+        entry_count = 2
+        out.write("    /* No subclasses for %s */\n"% base_type)
+        out.write("    %s_t *elt_p;\n" % base_type)
+        out.write("\n    elt_p = &elt;\n")
+    else:
+        entry_count = 2 * len(sub_classes) # Two of each type appended
+        out.write("    /* Declare pointers for each subclass */\n")
+        for instance, subcls in sub_classes:
+            out.write("    %s_t *%s;\n" % (subcls, instance))
+        out.write("\n    /* Instantiate pointers for each subclass */\n")
+        for instance, subcls in sub_classes:
+            out.write("    %s = &elt.%s;\n" % (instance, instance))
+
+    out.write("    TEST_OK(%(cls)s_first(list, &elt));\n" % dict(cls=cls))
+    if len(sub_classes) == 0: # No inheritance case
+        if loxi_utils.class_is_var_len(base_type, version):
+            inst_len = -1
+        else:
+            inst_len = loxi_utils.base_type_to_length(base_type, version)
+        check_instance(out, cls, base_type, "elt_p", v_name, inst_len, 
+                       version, True)
+    else:
+        count = 0
+        for instance, subcls in sub_classes:
+            count += 1
+            if loxi_utils.class_is_var_len(subcls, version):
+                inst_len = -1
+            else:
+                inst_len = of_g.base_length[(subcls, version)]
+            check_instance(out, cls, subcls, instance, v_name, inst_len, 
+                           version, count==len(sub_classes))
+    out.write("""
+""" % dict(base_type=base_type))
+
+    out.write("""
+    /* Do an iterate to test the iterator */
+    %(u_cls)s_ITER(list, &elt, rv) {
+        count += 1;
+    }
+
+    TEST_ASSERT(rv == OF_ERROR_RANGE);
+    TEST_ASSERT(count == %(entry_count)d);
+
+    /* We shoehorn a test of the dup functions here */
+    {
+        %(cls)s_t *dup;
+
+        TEST_ASSERT((dup = %(cls)s_dup(list)) != NULL);
+        TEST_ASSERT(dup->length == list->length);
+        TEST_ASSERT(dup->object_id == list->object_id);
+        TEST_ASSERT(dup->version == list->version);
+        TEST_ASSERT(MEMCMP(OF_OBJECT_BUFFER_INDEX(dup, 0),
+            OF_OBJECT_BUFFER_INDEX(list, 0), list->length) == 0);
+        of_object_delete((of_object_t *)dup);
+
+        /* And now for the generic dup function */
+        TEST_ASSERT((dup = (%(cls)s_t *)
+            of_object_dup(list)) != NULL);
+        TEST_ASSERT(dup->length == list->length);
+        TEST_ASSERT(dup->object_id == list->object_id);
+        TEST_ASSERT(dup->version == list->version);
+        TEST_ASSERT(MEMCMP(OF_OBJECT_BUFFER_INDEX(dup, 0),
+            OF_OBJECT_BUFFER_INDEX(list, 0), list->length) == 0);
+        of_object_delete((of_object_t *)dup);
+    }
+
+    return value;
+}
+""" % dict(cls=cls, u_cls=cls.upper(), entry_count=entry_count))
+
+
+def gen_class_setup_check(out, cls, version):
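+    """
+    Generate populate and check functions for a regular (non-list) class
+
+    Scalar members are covered by the generated *_populate_scalars and
+    *_check_scalars helpers; of_match_t, of_octets_t and sub-object
+    members get explicit set/get (or bind) round trips.
+    """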
+    out.write("""
+/**
+ * Populate all members of an object of type %(cls)s
+ * with incrementing values
+ * @param obj Pointer to an object to populate
+ * @param value The seed value to use in populating the object
+ * @returns The value after increments for this object's values
+ */
+
+int
+%(cls)s_%(v_name)s_populate(
+    %(cls)s_t *obj, int value)
+{
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+    members, member_types = loxi_utils.all_member_types_get(cls, version)
+    for m_type in member_types:
+        if loxi_utils.type_is_scalar(m_type) or m_type in ["of_match_t", "of_octets_t"]:
+            out.write("    %s %s;\n" % (m_type, var_name_map(m_type)))
+        else:
+            out.write("    %s *%s;\n" % (m_type, var_name_map(m_type)))
+    out.write("""
+    /* Run thru accessors after new to ensure okay */
+""")
+    for member in members:
+        m_type = member["m_type"]
+        m_name = member["name"]
+        if loxi_utils.skip_member_name(m_name):
+            continue
+        if loxi_utils.type_is_scalar(m_type) or m_type in ["of_match_t", "of_octets_t"]:
+            out.write("""\
+    %(cls)s_%(m_name)s_get(obj, &%(var_name)s);
+""" % dict(var_name=var_name_map(m_type), cls=cls, m_name=m_name))
+        else:
+            sub_cls = m_type[:-2] # Trim _t
+            out.write("""\
+    {
+        %(sub_cls)s_t sub_cls;
+
+        /* Test bind */
+        %(cls)s_%(m_name)s_bind(obj, &sub_cls);
+    }
+""" % dict(var_name=var_name_map(m_type), cls=cls, 
+           m_name=m_name, sub_cls=sub_cls,
+           v_name=loxi_utils.version_to_name(version)))
+
+    out.write("""
+    value = %(cls)s_%(v_name)s_populate_scalars(
+        obj, value);
+    TEST_ASSERT(value != 0);
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+
+    for member in members:
+        m_type = member["m_type"]
+        m_name = member["name"]
+        if loxi_utils.type_is_scalar(m_type): # Handled by call to scalar setup
+            continue
+        if loxi_utils.skip_member_name(m_name):
+            continue
+        if m_type == "of_match_t":
+            out.write("""\
+    value = of_match_populate(&%(var_name)s, %(v_name)s, value);
+    TEST_ASSERT(value != 0);
+    %(cls)s_%(m_name)s_set(
+        obj, &%(var_name)s);
+""" % dict(cls=cls, var_name=var_name_map(m_type), 
+           m_name=m_name, v_name=loxi_utils.version_to_name(version)))
+        elif m_type == "of_octets_t":
+            out.write("""\
+    value = of_octets_populate(&%(var_name)s, value);
+    TEST_ASSERT(value != 0);
+    %(cls)s_%(m_name)s_set(
+        obj, &%(var_name)s);
+    if (octets.bytes) {
+        FREE(octets.data);
+    }
+""" % dict(var_name=var_name_map(m_type), cls=cls, m_name=m_name))
+        else:
+            sub_cls = m_type[:-2] # Trim _t
+            out.write("""
+    %(var_name)s = %(sub_cls)s_new(%(v_name)s);
+    TEST_ASSERT(%(var_name)s != NULL);
+    value = %(sub_cls)s_%(v_name)s_populate(
+        %(var_name)s, value);
+    TEST_ASSERT(value != 0);
+    %(cls)s_%(m_name)s_set(
+        obj, %(var_name)s);
+    %(sub_cls)s_delete(%(var_name)s);
+""" % dict(cls=cls, sub_cls=sub_cls, m_name=m_name, m_type=m_type,
+           var_name=var_name_map(m_type),
+           v_name=loxi_utils.version_to_name(version)))
+
+    out.write("""
+    return value;
+}
+""")
+
+    out.write("""
+/**
+ * Check all members of an object of type %(cls)s
+ * populated by the above function
+ * @param obj Pointer to an object to check
+ * @param value Starting value for checking
+ * @returns The value after increments for this object's values
+ */
+
+int
+%(cls)s_%(v_name)s_check(
+    %(cls)s_t *obj, int value)
+{
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+    members, member_types = loxi_utils.all_member_types_get(cls, version)
+    for m_type in member_types:
+        if loxi_utils.type_is_scalar(m_type): # Handled by call to scalar setup
+            continue
+        if loxi_utils.type_is_of_object(m_type):
+            continue
+        out.write("    %s %s;\n" % (m_type, var_name_map(m_type)))
+    out.write("""
+    value = %(cls)s_%(v_name)s_check_scalars(
+        obj, value);
+    TEST_ASSERT(value != 0);
+""" % dict(cls=cls, v_name=loxi_utils.version_to_name(version)))
+
+    for member in members:
+        m_type = member["m_type"]
+        m_name = member["name"]
+        if loxi_utils.type_is_scalar(m_type): # Handled by call to scalar setup
+            continue
+        if loxi_utils.skip_member_name(m_name):
+            continue
+        if m_type == "of_match_t":
+            out.write("""\
+    %(cls)s_%(m_name)s_get(obj, &%(var_name)s);
+    value = of_match_check(&%(var_name)s, %(v_name)s, value);
+""" % dict(cls=cls, var_name=var_name_map(m_type), m_name=m_name,
+           v_name=loxi_utils.version_to_name(version)))
+        elif m_type == "of_octets_t":
+            out.write("""\
+    %(cls)s_%(m_name)s_get(obj, &%(var_name)s);
+    value = of_octets_check(&%(var_name)s, value);
+""" % dict(cls=cls, var_name=var_name_map(m_type), m_name=m_name,
+           v_name=loxi_utils.version_to_name(version)))
+        else:
+            sub_cls = m_type[:-2] # Trim _t
+            out.write("""
+    { /* Use get/delete to access on check */
+        %(m_type)s *%(m_name)s_ptr;
+
+        %(m_name)s_ptr = %(cls)s_%(m_name)s_get(obj);
+        TEST_ASSERT(%(m_name)s_ptr != NULL);
+        value = %(sub_cls)s_%(v_name)s_check(
+            %(m_name)s_ptr, value);
+        TEST_ASSERT(value != 0);
+        %(sub_cls)s_delete(%(m_name)s_ptr);
+    }
+""" % dict(cls=cls, sub_cls=sub_cls, m_name=m_name, m_type=m_type,
+           var_name=var_name_map(m_type),
+           v_name=loxi_utils.version_to_name(version)))
+
+    out.write("""
+    /* We shoehorn a test of the dup functions here */
+    {
+        %(cls)s_t *dup;
+
+        TEST_ASSERT((dup = %(cls)s_dup(obj)) != NULL);
+        TEST_ASSERT(dup->length == obj->length);
+        TEST_ASSERT(dup->object_id == obj->object_id);
+        TEST_ASSERT(dup->version == obj->version);
+        TEST_ASSERT(MEMCMP(OF_OBJECT_BUFFER_INDEX(dup, 0),
+            OF_OBJECT_BUFFER_INDEX(obj, 0), obj->length) == 0);
+        of_object_delete((of_object_t *)dup);
+
+        /* And now for the generic dup function */
+        TEST_ASSERT((dup = (%(cls)s_t *)
+            of_object_dup(obj)) != NULL);
+        TEST_ASSERT(dup->length == obj->length);
+        TEST_ASSERT(dup->object_id == obj->object_id);
+        TEST_ASSERT(dup->version == obj->version);
+        TEST_ASSERT(MEMCMP(OF_OBJECT_BUFFER_INDEX(dup, 0),
+            OF_OBJECT_BUFFER_INDEX(obj, 0), obj->length) == 0);
+        of_object_delete((of_object_t *)dup);
+    }
+
+    return value;
+}
+""" % dict(cls=cls))
+
+def unified_accessor_test_case(out, cls, version):
+    """
+    Generate one test case for the given version and class
+    """
+
+    members, member_types = scalar_member_types_get(cls, version)
+    length = of_g.base_length[(cls, version)]
+    v_name = loxi_utils.version_to_name(version)
+
+    out.write("""
+static int
+test_%(cls)s_%(v_name)s(void)
+{
+    %(cls)s_t *obj;
+    obj = %(cls)s_new(%(v_name)s);
+    TEST_ASSERT(obj != NULL);
+    TEST_ASSERT(obj->version == %(v_name)s);
+    TEST_ASSERT(obj->length == %(length)d);
+    TEST_ASSERT(obj->parent == NULL);
+    TEST_ASSERT(obj->object_id == %(u_cls)s);
+""" % dict(cls=cls, u_cls=cls.upper(), 
+           v_name=v_name, length=length, version=version))
+    if (not type_maps.class_is_virtual(cls)) or loxi_utils.class_is_list(cls):
+        out.write("""
+    if (obj->wire_length_get != NULL) {
+        int length;
+
+        obj->wire_length_get((of_object_t *)obj, &length);
+        TEST_ASSERT(length == %(length)d);
+    }
+    if (obj->wire_type_get != NULL) {
+        of_object_id_t obj_id;
+
+        obj->wire_type_get((of_object_t *)obj, &obj_id);
+        TEST_ASSERT(obj_id == %(u_cls)s);
+    }
+
+    /* Set up incrementing values for members */
+    TEST_ASSERT(%(cls)s_%(v_name)s_populate(
+        obj, 1) != 0);
+
+    /* Check values just set */
+    TEST_ASSERT(%(cls)s_%(v_name)s_check(
+        obj, 1) != 0);
+""" % dict(cls=cls, u_cls=cls.upper(), 
+           v_name=v_name, length=length, version=version))
+
+    out.write("""
+    %(cls)s_delete(obj);
+
+    /* To do: Check memory */
+    return TEST_PASS;
+}
+""" % dict(cls=cls))
+
+
+def gen_unified_accessor_funs(out):
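+    """
+    Generate populate/check helpers for every concrete class in every
+    version, dispatching to the list or regular-class generator
+    """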
+    for version in of_g.of_version_range:
+        for cls in of_g.standard_class_order:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            elif loxi_utils.class_is_list(cls):
+                gen_list_setup_check(out, cls, version)
+            else:
+                gen_class_setup_check(out, cls, version)
+
+def gen_unified_accessor_tests(out, name):
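+    """
+    Generate the unified accessor test cases along with the
+    run_unified_accessor_tests entry point that invokes them
+    """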
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/**
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * Unified simple class instantiation tests for all versions
+ */
+
+#include <locitest/test_common.h>
+""")
+    for version in of_g.of_version_range:
+        for cls in of_g.standard_class_order:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            unified_accessor_test_case(out, cls, version)
+
+    out.write("""
+int
+run_unified_accessor_tests(void)
+{
+""")
+    for version in of_g.of_version_range:
+        v_name = loxi_utils.version_to_name(version)
+        for cls in of_g.standard_class_order:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            test_name = "%s_%s" % (cls, v_name)
+            out.write("    RUN_TEST(%s);\n" % test_name)
+
+    out.write("    return TEST_PASS;\n}\n");
+
+
+
+################################################################
+#
+# Object duplication functions
+#
+# These exercise the accessors to create duplicate objects.
+# They are used in the LOCI test shim which sits in an OF
+# protocol stream.
+#
+# TODO
+# Resolve version stuff
+# Complete list dup
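+#
+# For a (hypothetical) concrete class of_flow_add, the generated functions
+# follow this pattern; the per-version routine comes from gen_dup_cls and
+# the dispatcher from gen_dup:
+#
+#   of_flow_add_t *of_flow_add_OF_VERSION_1_0_dup(of_flow_add_t *src);
+#   of_flow_add_t *of_flow_add_dup(of_flow_add_t *src); /* picks by src->version */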
+
+def gen_dup_list(out, cls, version):
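+    """
+    Generate a dup routine for a list class: iterate the source list,
+    duplicate each entry and append it to a newly allocated list
+    """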
+    ver_name = loxi_utils.version_to_name(version)
+    elt_type = loxi_utils.list_to_entry_type(cls)
+    out.write("""
+/**
+ * Duplicate a list of type %(cls)s 
+ * using accessor functions
+ * @param src Pointer to object to be duplicated
+ * @returns A new object of type %(cls)s.
+ *
+ * The caller is responsible for deleting the returned value
+ */
+%(cls)s_t *
+%(cls)s_%(ver_name)s_dup(
+    %(cls)s_t *src)
+{
+    %(elt_type)s_t src_elt;
+    %(elt_type)s_t *dst_elt;
+    int rv;
+    %(cls)s_t *dst;
+
+    if ((dst = %(cls)s_new(src->version)) == NULL) {
+        return NULL;
+    }
+""" % dict(elt_type=elt_type, cls=cls, ver_name=ver_name))
+
+    out.write("""
+    %(u_cls)s_ITER(src, &src_elt, rv) {
+        if ((dst_elt = %(elt_type)s_%(ver_name)s_dup(&src_elt)) == NULL) {
+            of_object_delete((of_object_t *)dst);
+            return NULL;
+        }
+        _TRY_FREE(%(cls)s_append(dst, dst_elt),
+            dst, NULL);
+        of_object_delete((of_object_t *)dst_elt);
+    }
+
+    return dst;
+}
+""" % dict(u_cls=cls.upper(), elt_type=elt_type, cls=cls, ver_name=ver_name))
+
+
+def gen_dup_inheritance(out, cls, version):
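+    """
+    Generate a dup routine for a virtual (super) class that dispatches
+    on the object_id of the concrete subclass instance
+    """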
+    ver_name = loxi_utils.version_to_name(version)
+    out.write("""
+/**
+ * Duplicate a super class object of type %(cls)s 
+ * @param src Pointer to object to be duplicated
+ * @returns A new object of type %(cls)s.
+ *
+ * The caller is responsible for deleting the returned value
+ */
+%(cls)s_t *
+%(cls)s_%(ver_name)s_dup(
+    %(cls)s_t *src)
+{
+""" % dict(cls=cls, ver_name=ver_name))
+
+    # For each subclass, check if this is an instance of that subclass
+    version_classes = type_maps.inheritance_data[cls][version]
+    for sub_cls in version_classes:
+        sub_enum = (cls + "_" + sub_cls).upper()
+        out.write("""
+    if (src->header.object_id == %(sub_enum)s) {
+        return (%(cls)s_t *)%(cls)s_%(sub_cls)s_%(ver_name)s_dup(
+            &src->%(sub_cls)s);
+    }
+""" % dict(sub_cls=sub_cls, ver_name=ver_name, sub_enum=sub_enum, cls=cls))
+
+    out.write("""
+    return NULL;
+}
+""")
+
+
+def gen_dup_cls(out, cls, version):
+    """
+    Generate duplication routine for class cls
+    """
+    ver_name = loxi_utils.version_to_name(version)
+
+    out.write("""
+/**
+ * Duplicate an object of type %(cls)s 
+ * using accessor functions
+ * @param src Pointer to object to be duplicated
+ * @returns A new object of type %(cls)s.
+ *
+ * The caller is responsible for deleting the returned value
+ */
+%(cls)s_t *
+%(cls)s_%(ver_name)s_dup(
+    %(cls)s_t *src)
+{
+    %(cls)s_t *dst;
+""" % dict(cls=cls, ver_name=ver_name))
+
+    # Get members and types for the class
+    members, member_types = loxi_utils.all_member_types_get(cls, version)
+
+    # Add declarations for each member type
+    for m_type in member_types:
+        if loxi_utils.type_is_scalar(m_type) or m_type in ["of_match_t", "of_octets_t"]:
+            # Declare instance of these
+            out.write("    %s %s;\n" % (m_type, var_name_map(m_type)))
+        else:
+            out.write("""
+    %(m_type)s src_%(v_name)s;
+    %(m_type)s *dst_%(v_name)s;
+"""  % dict(m_type=m_type, v_name=var_name_map(m_type)))
+
+    out.write("""
+    if ((dst = %(cls)s_new(src->version)) == NULL) {
+        return NULL;
+    }
+""" % dict(cls=cls))
+
+    for member in members:
+        m_type = member["m_type"]
+        m_name = member["name"]
+        if loxi_utils.skip_member_name(m_name):
+            continue
+        if loxi_utils.type_is_scalar(m_type): # Handled by call to scalar setup
+            out.write("""
+    %(cls)s_%(m_name)s_get(src, &%(v_name)s);
+    %(cls)s_%(m_name)s_set(dst, %(v_name)s);
+""" % dict(cls=cls, m_name=m_name, v_name=var_name_map(m_type)))
+        elif m_type in ["of_match_t", "of_octets_t"]:
+            out.write("""
+    %(cls)s_%(m_name)s_get(src, &%(v_name)s);
+    %(cls)s_%(m_name)s_set(dst, &%(v_name)s);
+""" % dict(cls=cls, m_name=m_name, v_name=var_name_map(m_type)))
+        else:
+            sub_cls = m_type[:-2] # Trim _t
+            out.write("""
+    %(cls)s_%(m_name)s_bind(
+        src, &src_%(v_name)s);
+    dst_%(v_name)s = %(sub_cls)s_%(ver_name)s_dup(&src_%(v_name)s);
+    if (dst_%(v_name)s == NULL) {
+        %(cls)s_delete(dst);
+        return NULL;
+    }
+    %(cls)s_%(m_name)s_set(dst, dst_%(v_name)s);
+    %(sub_cls)s_delete(dst_%(v_name)s);
+""" % dict(sub_cls=sub_cls, cls=cls, m_name=m_name, 
+           v_name=var_name_map(m_type), ver_name=ver_name))
+
+    out.write("""
+    return dst;
+}
+""")
+
+def gen_version_dup(out=sys.stdout):
+    """
+    Generate duplication routines for each object type
+    """
+    out.write("""
+/* Special try macro for duplicating */
+#define _TRY_FREE(op, obj, rv) do { \\
+        int _rv;                                                             \\
+        if ((_rv = (op)) < 0) {                                              \\
+            LOCI_LOG_ERROR("ERROR %d at %s:%d\\n", _rv, __FILE__, __LINE__);    \\
+            of_object_delete((of_object_t *)(obj));                          \\
+            return (rv);                                                     \\
+        }                                                                    \\
+    } while (0)
+""")
+
+    for version in of_g.of_version_range:
+        for cls in of_g.standard_class_order:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                gen_dup_inheritance(out, cls, version)
+            elif loxi_utils.class_is_list(cls):
+                gen_dup_list(out, cls, version)
+            else:
+                gen_dup_cls(out, cls, version)
+
+def gen_dup(out=sys.stdout):
+    """
+    Generate non-version specific duplication routines for each object type
+    """
+
+    for cls in of_g.standard_class_order:
+        out.write("""
+%(cls)s_t *
+%(cls)s_dup(
+    %(cls)s_t *src)
+{
+""" % dict(cls=cls))
+        for version in of_g.of_version_range:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            hdr = "header." if cls in type_maps.inheritance_map else ""
+
+            ver_name = loxi_utils.version_to_name(version)
+            out.write("""
+    if (src->%(hdr)sversion == %(ver_name)s) {
+        return %(cls)s_%(ver_name)s_dup(src);
+    }
+""" % dict(cls=cls, ver_name=ver_name, hdr=hdr))
+
+        out.write("""
+    /* Class not supported in given version */
+    return NULL;
+}
+""")
+
+def dup_c_gen(out, name):
+    """
+    Generate the C file for duplication functions
+    """
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""\
+/*
+ * Duplication functions for all OF objects
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * These are test functions for exercising accessors.  You can call
+ * of_object_dup for an efficient duplication.
+ */
+
+#define DISABLE_WARN_UNUSED_RESULT
+#include "loci_log.h"
+#include <locitest/of_dup.h>
+
+""")
+
+    gen_version_dup(out)
+    gen_dup(out)
+
+
+def dup_h_gen(out, name):
+    """
+    Generate the header file for duplication functions
+    """
+
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/*
+ * Duplication function header file
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ */
+
+#if !defined(_OF_DUP_H_)
+#define _OF_DUP_H_
+
+#include <loci/loci.h>
+""")
+
+    for cls in of_g.standard_class_order:
+        out.write("""
+extern %(cls)s_t *
+    %(cls)s_dup(
+        %(cls)s_t *src);
+""" % dict(cls=cls))
+
+    for version in of_g.of_version_range:
+        for cls in of_g.standard_class_order:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            ver_name = loxi_utils.version_to_name(version)
+            out.write("""
+extern %(cls)s_t *
+    %(cls)s_%(ver_name)s_dup(
+        %(cls)s_t *src);
+""" % dict(cls=cls, ver_name=ver_name))
+
+    out.write("\n#endif /* _OF_DUP_H_ */\n")
+
+def gen_log_test(out):
+    """
+    Generate test for obj log calls
+
+    Define a trivial handler for object logging; call all obj log fns
+    """
+    out.write("""
+
+/**
+ * Test object dump functions
+ */
+
+int
+test_dump_objs(void)
+{
+    of_object_t *obj;
+
+    FILE *out = fopen("/dev/null", "w");
+
+    /* Call each obj dump function */
+""")
+    for version in of_g.of_version_range:
+        for cls in of_g.all_class_order:
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            out.write("""
+    obj = (of_object_t *)%(cls)s_new(%(version)s);
+    of_object_dump((loci_writer_f)fprintf, out, obj);
+    of_object_delete(obj);
+""" % dict(cls=cls, version=of_g.of_version_wire2name[version]))
+    
+    out.write("""
+    fclose(out);
+    return TEST_PASS;
+}
+""")
+
+def gen_ident_tests(out):
+    """
+    Generate tests for identifiers
+
+    For all idents, instantiate, test version supported macros
+    For flags, set it, test it, clear it, test it.
+    """
+    out.write("""
+/**
+ * Test cases for all flag accessor macros
+ * These only test self consistency (and that they compile)
+ */
+int
+test_ident_macros(void)
+{
+    int value __attribute__((unused));
+    uint32_t flags;
+
+""")
+
+    for ident, info in of_g.identifiers.items():
+        if not identifiers.defined_versions_agree(of_g.identifiers,
+                                                  of_g.target_version_list,
+                                                  ident):
+            # @fixme
+            continue
+        out.write("    value = %s;\n" % ident)
+        for version in of_g.target_version_list:
+            if version in info["values_by_version"].keys():
+                out.write("    TEST_ASSERT(%s_SUPPORTED(%s));\n" %
+                          (ident, of_g.of_version_wire2name[version]))
+            else:
+                out.write("    TEST_ASSERT(!%s_SUPPORTED(%s));\n" %
+                          (ident, of_g.of_version_wire2name[version]))
+        if flags.ident_is_flag(ident):
+            # Grab first supported version
+            for version in info["values_by_version"]:
+                break
+            out.write("""
+    flags = 0;
+    %(ident)s_SET(flags, %(ver_name)s);
+    TEST_ASSERT(flags == %(ident)s_BY_VERSION(%(ver_name)s));
+    TEST_ASSERT(%(ident)s_TEST(flags, %(ver_name)s));
+    %(ident)s_CLEAR(flags, %(ver_name)s);
+    TEST_ASSERT(flags == 0);
+    TEST_ASSERT(!%(ident)s_TEST(flags, %(ver_name)s));
+""" % dict(ident=ident, ver_name=of_g.of_version_wire2name[version]))
+
+    out.write("""
+    return TEST_PASS;
+}
+""")
+
diff --git a/c_gen/c_type_maps.py b/c_gen/c_type_maps.py
new file mode 100644
index 0000000..924243a
--- /dev/null
+++ b/c_gen/c_type_maps.py
@@ -0,0 +1,1107 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+##
+# @brief C code generation for LOXI type related maps
+#
+
+import of_g
+import sys
+from generic_utils import *
+import loxi_front_end.oxm as oxm
+import loxi_front_end.type_maps as type_maps
+
+
+# A value larger than any normal wire type value, but less than
+# reserved values like 0xffff
+max_type_value = 1000
+
+def gen_object_id_to_type(out):
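+    """
+    Generate the per-version arrays mapping object ID to primary wire
+    type, plus the unified of_object_to_type_map indexed by wire version
+    """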
+    out.write("""
+/**
+ * Map from object ID to primary wire type
+ *
+ * For messages, this is the header type; in particular for stats, this is
+ * the common stats request/response type.  For per-stats types, use the
+ * stats type map.  For things like actions, instructions or queue-props,
+ * this gives the "sub type".
+ */
+""")
+    for version in of_g.of_version_range:
+        out.write("static int\nof_object_to_type_map_v%d[OF_OBJECT_COUNT] = {\n"
+                  %version)
+        out.write("    -1, /* of_object, not a valid specific type */\n")
+        for j, cls in enumerate(of_g.all_class_order):
+            comma = ""
+            if j < len(of_g.all_class_order) - 1: # Avoid ultimate comma
+                comma = ","
+
+            if cls in type_maps.stats_reply_list:
+                out.write("    %d%s /* %s */\n" % 
+                          (type_maps.type_val[("of_stats_reply", version)],
+                           comma, cls))
+            elif cls in type_maps.stats_request_list:
+                out.write("    %d%s /* %s */\n" % 
+                          (type_maps.type_val[("of_stats_request", version)],
+                           comma, cls))
+            elif cls in type_maps.flow_mod_list:
+                out.write("    %d%s /* %s */\n" % 
+                          (type_maps.type_val[("of_flow_mod", version)],
+                           comma, cls))
+            elif (cls, version) in type_maps.type_val:
+                out.write("    %d%s /* %s */\n" % 
+                          (type_maps.type_val[(cls, version)], comma, cls))
+            elif type_maps.message_is_extension(cls, version):
+                out.write("    %d%s /* %s */\n" % 
+                          (type_maps.type_val[("of_experimenter", version)],
+                           comma, cls))
+            elif type_maps.action_is_extension(cls, version):
+                out.write("    %d%s /* %s */\n" % 
+                          (type_maps.type_val[("of_action_experimenter",
+                                               version)],
+                           comma, cls))
+            elif type_maps.action_id_is_extension(cls, version):
+                out.write("    %d%s /* %s */\n" % 
+                          (type_maps.type_val[("of_action_id_experimenter",
+                                               version)],
+                           comma, cls))
+            elif type_maps.instruction_is_extension(cls, version):
+                out.write("    %d%s /* %s */\n" % 
+                          (type_maps.type_val[("of_instruction_experimenter",
+                                               version)],
+                           comma, cls))
+            elif type_maps.queue_prop_is_extension(cls, version):
+                out.write("    %d%s /* %s */\n" % 
+                          (type_maps.type_val[("of_queue_prop_experimenter",
+                                               version)],
+                           comma, cls))
+            elif type_maps.table_feature_prop_is_extension(cls, version):
+                out.write("    %d%s /* %s */\n" % 
+                    (type_maps.type_val[("of_table_feature_prop_experimenter",
+                                         version)],
+                     comma, cls))
+            else:
+                out.write("    -1%s /* %s (invalid) */\n" % (comma, cls))
+        out.write("};\n\n")
+
+    out.write("""
+/**
+ * Unified map, indexed by wire version which is 1-based.
+ */
+int *of_object_to_type_map[OF_VERSION_ARRAY_MAX] = {
+    NULL,
+""")
+    for version in of_g.of_version_range:
+        out.write("    of_object_to_type_map_v%d,\n" % version)
+    out.write("""
+};
+""")
+
+def gen_object_id_to_extension_data(out):
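+    """
+    Generate the per-version arrays mapping object ID to extension data
+    (is_extension, experimenter id, subtype), plus the unified
+    of_object_to_extension_data map indexed by wire version
+    """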
+    out.write("""
+/**
+ * Extension data.
+ * @fixme There must be a better way to represent this data
+ */
+""")
+    for version in of_g.of_version_range:
+        out.write("""
+static of_experimenter_data_t
+of_object_to_extension_data_v%d[OF_OBJECT_COUNT] = {
+""" % version)
+        out.write("    {0, 0, 0}, /* of_object, not a valid specific type */\n")
+        for j, cls in enumerate(of_g.all_class_order):
+            comma = ""
+            if j < len(of_g.all_class_order) - 1: # Avoid ultimate comma
+                comma = ","
+
+            if type_maps.class_is_extension(cls, version):
+                exp_name = type_maps.extension_to_experimenter_macro_name(cls)
+                subtype = type_maps.extension_to_subtype(cls, version)
+                out.write("    {1, %s, %d}%s /* %s */\n" % 
+                          (exp_name, subtype, comma, cls))
+            else:
+                out.write("    {0, 0, 0}%s /* %s (non-extension) */\n" %
+                          (comma, cls))
+        out.write("};\n\n")
+
+    out.write("""
+/**
+ * Unified map, indexed by wire version which is 1-based.
+ */
+of_experimenter_data_t *of_object_to_extension_data[OF_VERSION_ARRAY_MAX] = {
+    NULL,
+""")
+    for version in of_g.of_version_range:
+        out.write("    of_object_to_extension_data_v%d,\n" % version)
+    out.write("""
+};
+""")
+
+def gen_type_to_object_id(out, type_str, prefix, template,
+                          value_array, max_val):
+    """
+    Generate C maps from various message class groups to object ids
+
+    For each version, create an array mapping the type info to the
+    object ID.  Then define an array containing those pointers.
+    """
+
+    # Create unified arrays and get length
+    arr_len = type_maps.type_array_len(value_array, max_val)
+    all_ars = []
+    for version, val_dict in value_array.items(): # Per version dict
+        ar = type_maps.dict_to_array(val_dict, max_val, type_maps.invalid_type)
+        all_ars.append(ar)
+
+    len_name = "%s_ITEM_COUNT" % prefix
+
+    for i, ar in enumerate(all_ars):
+        version = i + 1
+        out.write("static of_object_id_t\nof_%s_v%d[%s] = {\n" %
+                  (type_str, version, len_name))
+        for i in range(arr_len):
+            comma = ""
+            if i < arr_len - 1: # Avoid ultimate comma
+                comma = ","
+
+            # Per-version length check
+            if i < len(ar):
+                v = ar[i]
+            else:
+                v = type_maps.invalid_type
+
+            if v == type_maps.invalid_type:
+                out.write("    %-30s /* %d (Invalid) */\n" %
+                          ("OF_OBJECT_INVALID" + comma, i))
+            else:
+                name = (template % v.upper()) + comma
+                out.write("    %-30s /* %d */\n" % (name, i))
+        out.write("};\n")
+
+    out.write("""
+/**
+ * Maps from %(c_name)s wire type values to LOCI object ids
+ *
+ * Indexed by wire version which is 1-based.
+ */
+
+of_object_id_t *of_%(name)s[OF_VERSION_ARRAY_MAX] = {
+    NULL,
+""" % dict(name=type_str, c_name=prefix.lower()))
+    for version in of_g.of_version_range:
+        out.write("    of_%(name)s_v%(version)d,\n" % dict(name=type_str,
+                                                           version=version))
+    out.write("""
+};
+
+""" % dict(name=type_str, u_name=type_str.upper(), 
+           max_val=max_val, c_name=prefix.lower()))
+
+def gen_type_maps(out):
+    """
+    Generate various type maps
+    @param out The file handle to write to
+    """
+
+    out.write("#include <loci/loci.h>\n\n")
+
+    # Generate maps from wire type values to object IDs
+    gen_type_to_object_id(out, "action_type_to_id", "OF_ACTION",
+                          "OF_ACTION_%s", type_maps.action_types,
+                          max_type_value)
+    gen_type_to_object_id(out, "action_id_type_to_id", "OF_ACTION_ID",
+                          "OF_ACTION_ID_%s", type_maps.action_id_types,
+                          max_type_value)
+    gen_type_to_object_id(out, "instruction_type_to_id", "OF_INSTRUCTION",
+                          "OF_INSTRUCTION_%s", type_maps.instruction_types, 
+                          max_type_value)
+    gen_type_to_object_id(out, "queue_prop_type_to_id", "OF_QUEUE_PROP",
+                          "OF_QUEUE_PROP_%s", type_maps.queue_prop_types,
+                          max_type_value)
+    gen_type_to_object_id(out, "table_feature_prop_type_to_id",
+                          "OF_TABLE_FEATURE_PROP",
+                          "OF_TABLE_FEATURE_PROP_%s",
+                          type_maps.table_feature_prop_types,
+                          max_type_value)
+    gen_type_to_object_id(out, "meter_band_type_to_id", "OF_METER_BAND",
+                          "OF_METER_BAND_%s", type_maps.meter_band_types,
+                          max_type_value)
+    gen_type_to_object_id(out, "hello_elem_type_to_id", "OF_HELLO_ELEM",
+                          "OF_HELLO_ELEM_%s", type_maps.hello_elem_types,
+                          max_type_value)
+
+    # FIXME:  Multipart re-organization
+    gen_type_to_object_id(out, "stats_request_type_to_id", "OF_STATS_REQUEST",
+                          "OF_%s_STATS_REQUEST", type_maps.stats_types,
+                          max_type_value)
+    gen_type_to_object_id(out, "stats_reply_type_to_id", "OF_STATS_REPLY",
+                          "OF_%s_STATS_REPLY", type_maps.stats_types,
+                          max_type_value)
+    gen_type_to_object_id(out, "flow_mod_type_to_id", "OF_FLOW_MOD",
+                          "OF_FLOW_%s", type_maps.flow_mod_types,
+                          max_type_value)
+    gen_type_to_object_id(out, "oxm_type_to_id", "OF_OXM",
+                          "OF_OXM_%s", type_maps.oxm_types, max_type_value)
+    gen_type_to_object_id(out, "message_type_to_id", "OF_MESSAGE",
+                          "OF_%s", type_maps.message_types, max_type_value)
+
+    gen_object_id_to_type(out)
+    gen_object_id_to_extension_data(out)
+    # Don't need array mapping ID to stats types right now; handled directly
+    # gen_object_id_to_stats_type(out)
+
+
+def gen_type_to_obj_map_functions(out):
+    """
+    Generate the templated static inline type map functions
+    @param out The file handle to write to
+    """
+
+    ################################################################
+    # Generate all type-to-object-ID maps in a common way
+    ################################################################
+    map_template = """
+/**
+ * %(name)s wire type to object ID array.
+ * Treat as private; use function accessor below
+ */
+
+extern of_object_id_t *of_%(name)s_type_to_id[OF_VERSION_ARRAY_MAX];
+
+#define OF_%(u_name)s_ITEM_COUNT %(ar_len)d\n
+
+/**
+ * Map an %(name)s wire value to an OF object
+ * @param %(name)s The %(name)s type wire value
+ * @param version The version associated with the check
+ * @return The %(name)s OF object type
+ * @return OF_OBJECT_INVALID if type does not map to an object
+ * 
+ */
+static inline of_object_id_t
+of_%(name)s_to_object_id(int %(name)s, of_version_t version) 
+{
+    if (!OF_VERSION_OKAY(version)) {
+        return OF_OBJECT_INVALID;
+    }
+    if (%(name)s < 0 || %(name)s >= OF_%(u_name)s_ITEM_COUNT) {
+        return OF_OBJECT_INVALID;
+    }
+
+    return of_%(name)s_type_to_id[version][%(name)s];
+}
+"""
+    map_with_experimenter_template = """
+/**
+ * %(name)s wire type to object ID array.
+ * Treat as private; use function accessor below
+ */
+
+extern of_object_id_t *of_%(name)s_type_to_id[OF_VERSION_ARRAY_MAX];
+
+#define OF_%(u_name)s_ITEM_COUNT %(ar_len)d\n
+
+/**
+ * Map an %(name)s wire value to an OF object
+ * @param %(name)s The %(name)s type wire value
+ * @param version The version associated with the check
+ * @return The %(name)s OF object type
+ * @return OF_OBJECT_INVALID if type does not map to an object
+ * 
+ */
+static inline of_object_id_t
+of_%(name)s_to_object_id(int %(name)s, of_version_t version) 
+{
+    if (!OF_VERSION_OKAY(version)) {
+        return OF_OBJECT_INVALID;
+    }
+    if (%(name)s == OF_EXPERIMENTER_TYPE) {
+        return OF_%(u_name)s_EXPERIMENTER;
+    }
+    if (%(name)s < 0 || %(name)s >= OF_%(u_name)s_ITEM_COUNT) {
+        return OF_OBJECT_INVALID;
+    }
+
+    return of_%(name)s_type_to_id[version][%(name)s];
+}
+"""
+
+    stats_template = """
+/**
+ * %(name)s wire type to object ID array.
+ * Treat as private; use function accessor below
+ */
+
+extern of_object_id_t *of_%(name)s_type_to_id[OF_VERSION_ARRAY_MAX];
+
+#define OF_%(u_name)s_ITEM_COUNT %(ar_len)d\n
+
+/**
+ * Map an %(name)s wire value to an OF object
+ * @param %(name)s The %(name)s type wire value
+ * @param version The version associated with the check
+ * @return The %(name)s OF object type
+ * @return OF_OBJECT_INVALID if type does not map to an object
+ * 
+ */
+static inline of_object_id_t
+of_%(name)s_to_object_id(int %(name)s, of_version_t version) 
+{
+    if (!OF_VERSION_OKAY(version)) {
+        return OF_OBJECT_INVALID;
+    }
+    if (%(name)s == OF_EXPERIMENTER_TYPE) {
+        return OF_EXPERIMENTER_%(u_name)s;
+    }
+    if (%(name)s < 0 || %(name)s >= OF_%(u_name)s_ITEM_COUNT) {
+        return OF_OBJECT_INVALID;
+    }
+
+    return of_%(name)s_type_to_id[version][%(name)s];
+}
+"""
+    # Experimenter mapping functions
+    # Currently we support very few candidates, so we just do a
+    # list of if/elses
+    experimenter_function = """
+/**
+ * @brief Map a message known to be an exp msg to the proper object
+ *
+ * Assume that the message is a vendor/experimenter message.  Determine
+ * the specific object type for the message.
+ * @param msg An OF message object (uint8_t *)
+ * @param length The number of bytes in the message (for error checking)
+ * @param version Version of message
+ * @returns object ID of specific type if recognized or OF_EXPERIMENTER if not
+ *
+ * @todo put OF_EXPERIMENTER_<name> in loci_base.h
+ */
+
+static inline of_object_id_t
+of_message_experimenter_to_object_id(of_message_t msg, of_version_t version) {
+    uint32_t experimenter_id;
+    uint32_t subtype;
+
+    /* Extract experimenter and subtype value; look for match from type maps */
+    experimenter_id = of_message_experimenter_id_get(msg);
+    subtype = of_message_experimenter_subtype_get(msg);
+
+    /* Do a simple if/else search for the ver, experimenter and subtype */
+"""
+    first = True
+    for version, experimenter_lists in type_maps.extension_message_subtype.items():
+        for exp, subtypes in experimenter_lists.items():
+            experimenter_function += """
+    if ((experimenter_id == OF_EXPERIMENTER_ID_%(exp_name)s) && 
+            (version == %(ver_name)s)) {
+""" % dict(exp_name=exp.upper(), ver_name=of_g.wire_ver_map[version])
+            for ext_msg, subtype in subtypes.items():
+                experimenter_function += """
+        if (subtype == %(subtype)s) {
+            return %(ext_msg)s;
+        }
+""" % dict(subtype=subtype, ext_msg=ext_msg.upper())
+            experimenter_function += """
+    }
+"""
+    experimenter_function += """
+    return OF_EXPERIMENTER;
+}
+"""
+
+    # Messages need different handling
+    msg_template = """
+/**
+ * %(name)s wire type to object ID array.
+ * Treat as private; use function accessor below
+ */
+
+extern of_object_id_t *of_%(name)s_type_to_id[OF_VERSION_ARRAY_MAX];
+
+#define OF_%(u_name)s_ITEM_COUNT %(ar_len)d\n
+
+/**
+ * Extract the type info from the message and determine its object type
+ * @param msg An OF message object (uint8_t *)
+ * @param length The number of bytes in the message (for error checking)
+ * @returns object ID or OF_OBJECT_INVALID if parse error
+ */
+
+static inline of_object_id_t
+of_message_to_object_id(of_message_t msg, int length) {
+    uint8_t type;
+    of_version_t ver;
+    of_object_id_t obj_id;
+    uint16_t stats_type;
+    uint8_t flow_mod_cmd;
+
+    if (length < OF_MESSAGE_MIN_LENGTH) {
+        return OF_OBJECT_INVALID;
+    }
+    type = of_message_type_get(msg);
+    ver = of_message_version_get(msg);
+    if (!OF_VERSION_OKAY(ver)) {
+        return OF_OBJECT_INVALID;
+    }
+
+    if (type >= OF_MESSAGE_ITEM_COUNT) {
+        return OF_OBJECT_INVALID;
+    }
+
+    obj_id = of_message_type_to_id[ver][type];
+
+    /* Remap to specific message if known */
+    if (obj_id == OF_EXPERIMENTER) {
+        if (length < OF_MESSAGE_EXPERIMENTER_MIN_LENGTH) {
+            return OF_OBJECT_INVALID;
+        }
+        return of_message_experimenter_to_object_id(msg, ver);
+    }
+
+    /* Remap to add/delete/strict version */
+    if (obj_id == OF_FLOW_MOD) {
+        if (length < OF_MESSAGE_MIN_FLOW_MOD_LENGTH(ver)) {
+            return OF_OBJECT_INVALID;
+        }
+        flow_mod_cmd = of_message_flow_mod_command_get(msg, ver);
+        obj_id = of_flow_mod_to_object_id(flow_mod_cmd, ver);
+    }
+
+    if ((obj_id == OF_STATS_REQUEST) || (obj_id == OF_STATS_REPLY)) {
+        if (length < OF_MESSAGE_MIN_STATS_LENGTH) {
+            return OF_OBJECT_INVALID;
+        }
+        stats_type = of_message_stats_type_get(msg);
+        if (obj_id == OF_STATS_REQUEST) {
+            obj_id = of_stats_request_to_object_id(stats_type, ver);
+        } else {
+            obj_id = of_stats_reply_to_object_id(stats_type, ver);
+        }
+    }
+
+    return obj_id;
+}
+"""
+
+    # Action types array gen
+    ar_len = type_maps.type_array_len(type_maps.action_types, max_type_value)
+    out.write(map_with_experimenter_template % 
+              dict(name="action", u_name="ACTION", ar_len=ar_len))
+
+    # Action ID types array gen
+    ar_len = type_maps.type_array_len(type_maps.action_id_types, max_type_value)
+    out.write(map_with_experimenter_template % 
+              dict(name="action_id", u_name="ACTION_ID", ar_len=ar_len))
+
+    # Instruction types array gen
+    ar_len = type_maps.type_array_len(type_maps.instruction_types,
+                                      max_type_value)
+    out.write(map_with_experimenter_template % 
+              dict(name="instruction", u_name="INSTRUCTION", ar_len=ar_len))
+
+    # Queue prop types array gen
+    ar_len = type_maps.type_array_len(type_maps.queue_prop_types,
+                                      max_type_value)
+    out.write(map_with_experimenter_template % 
+              dict(name="queue_prop", u_name="QUEUE_PROP", ar_len=ar_len))
+
+    # Table feature prop types array gen
+    ar_len = type_maps.type_array_len(type_maps.table_feature_prop_types,
+                                      max_type_value)
+    out.write(map_with_experimenter_template % 
+              dict(name="table_feature_prop", u_name="TABLE_FEATURE_PROP",
+                   ar_len=ar_len))
+
+    # Meter band types array gen
+    ar_len = type_maps.type_array_len(type_maps.meter_band_types,
+                                      max_type_value)
+    out.write(map_with_experimenter_template % 
+              dict(name="meter_band", u_name="METER_BAND", ar_len=ar_len))
+
+    # Hello elem types array gen
+    ar_len = type_maps.type_array_len(type_maps.hello_elem_types,
+                                      max_type_value)
+    out.write(map_template % 
+              dict(name="hello_elem", u_name="HELLO_ELEM", ar_len=ar_len))
+
+    # Stats types array gen
+    ar_len = type_maps.type_array_len(type_maps.stats_types,
+                                      max_type_value)
+    out.write(stats_template % 
+              dict(name="stats_reply", u_name="STATS_REPLY", ar_len=ar_len))
+    out.write(stats_template % 
+              dict(name="stats_request", u_name="STATS_REQUEST", 
+                   ar_len=ar_len))
+
+    ar_len = type_maps.type_array_len(type_maps.flow_mod_types, max_type_value)
+    out.write(map_template % 
+              dict(name="flow_mod", u_name="FLOW_MOD", ar_len=ar_len))
+
+    ar_len = type_maps.type_array_len(type_maps.oxm_types, max_type_value)
+    out.write("""
+/* NOTE: We could optimize the OXM and only generate OF 1.2 versions. */
+""")
+    out.write(map_template % 
+              dict(name="oxm", u_name="OXM", ar_len=ar_len))
+
+    out.write(experimenter_function)
+    # Must follow stats reply/request
+    ar_len = type_maps.type_array_len(type_maps.message_types, max_type_value)
+    out.write(msg_template % 
+              dict(name="message", u_name="MESSAGE", ar_len=ar_len))
+
+def gen_obj_to_type_map_functions(out):
+    """
+    Generate the static line maps from object IDs to types
+    @param out The file handle to write to
+    """
+
+    ################################################################
+    # Generate object ID to primary type map
+    ################################################################
+
+    out.write("""
+extern int *of_object_to_type_map[OF_VERSION_ARRAY_MAX];
+
+/**
+ * Map an object ID to its primary wire type value
+ * @param id An object ID
+ * @return For message objects, the type value in the OpenFlow header
+ * @return For non-message objects such as actions, instructions, OXMs
+ * returns the type value that appears in the respective sub-header
+ * @return -1 For improper version or out of bounds input
+ *
+ * NOTE that for stats request/reply, returns the header type, not the
+ * sub-type
+ *
+ * Also, note that the value is returned as a signed integer.  So -1 is
+ * an error code, while 0xffff is the usual "experimenter" code.
+ */
+static inline int
+of_object_to_wire_type(of_object_id_t id, of_version_t version)
+{
+    if (!OF_VERSION_OKAY(version)) {
+        return -1;
+    }
+    if (id < 0 || id >= OF_OBJECT_COUNT) {
+        return -1;
+    }
+    return of_object_to_type_map[version][id];
+}
+
+""")
+
+    # Now for experimenter ids
+    out.write("""
+/**
+ * Map from object ID to a triple, (is_extension, experimenter id, subtype)
+ */
+""")
+    out.write("""
+typedef struct of_experimenter_data_s {
+    int is_extension;  /* Boolean indication that this is an extension */
+    uint32_t experimenter_id;
+    uint32_t subtype;
+} of_experimenter_data_t;
+
+""")
+
+    out.write("""
+extern of_experimenter_data_t *of_object_to_extension_data[OF_VERSION_ARRAY_MAX];
+
+/**
+ * Map from the object ID of an extension to the experimenter ID
+ */
+static inline uint32_t
+of_extension_to_experimenter_id(of_object_id_t obj_id, of_version_t ver)
+{
+    if (obj_id < 0 || obj_id > OF_OBJECT_COUNT) {
+        return (uint32_t) -1;
+    }
+    /* @fixme: Verify ver? */
+    return of_object_to_extension_data[ver][obj_id].experimenter_id;
+}
+
+/**
+ * Map from the object ID of an extension to the experimenter subtype
+ */
+static inline uint32_t
+of_extension_to_experimenter_subtype(of_object_id_t obj_id, of_version_t ver)
+{
+    if (obj_id < 0 || obj_id > OF_OBJECT_COUNT) {
+        return (uint32_t) -1;
+    }
+    /* @fixme: Verify ver? */
+    return of_object_to_extension_data[ver][obj_id].subtype;
+}
+
+/**
+ * Boolean function indicating whether the given object ID/version
+ * is recognized as a supported (decode-able) extension.
+ */
+static inline int
+of_object_id_is_extension(of_object_id_t obj_id, of_version_t ver)
+{
+    if (obj_id < 0 || obj_id > OF_OBJECT_COUNT) {
+        return (uint32_t) -1;
+    }
+    /* @fixme: Verify ver? */
+    return of_object_to_extension_data[ver][obj_id].is_extension;
+}
+""")
+
+    ################################################################
+    # Generate object ID to the stats sub-type map
+    ################################################################
+
+    out.write("""
+/**
+ * Map an object ID to a stats type
+ * @param id An object ID
+ * @return The wire value for the stats type
+ * @return -1 if not supported for this version
+ * @return -1 if id is not a specific stats type ID
+ *
+ * Note that the value is returned as a signed integer.  So -1 is
+ * an error code, while 0xffff is the usual "experimenter" code.
+ */
+
+static inline int
+of_object_to_stats_type(of_object_id_t id, of_version_t version)
+{
+    if (!OF_VERSION_OKAY(version)) {
+        return -1;
+    }
+    switch (id) {
+""")
+    # Assumes 1.2 contains all stats types and type values are
+    # the same across all versions
+    stats_names = dict()
+    for ver in of_g.of_version_range:
+        for name, value in type_maps.stats_types[ver].items():
+            if name in stats_names and (not value == stats_names[name]):
+                print "ERROR stats type differ violating assumption"
+                sys.exit(1)
+            stats_names[name] = value
+
+    for name, value in stats_names.items():
+        out.write("    case OF_%s_STATS_REPLY:\n" % name.upper())
+        out.write("    case OF_%s_STATS_REQUEST:\n" % name.upper())
+        for version in of_g.of_version_range:
+            if not name in type_maps.stats_types[version]:
+                out.write("        if (version == %s) break;\n" %
+                          of_g.of_version_wire2name[version])
+        out.write("        return %d;\n" % value)
+    out.write("""
+    default:
+        break;
+    }
+    return -1; /* Not recognized as stats type object for this version */
+}
+""")
+
+    ################################################################
+    # Generate object ID to the flow mod sub-type map
+    ################################################################
+
+    out.write("""
+/**
+ * Map an object ID to a flow-mod command value
+ * @param id An object ID
+ * @return The wire value for the flow-mod command
+ * @return -1 if not supported for this version
+ * @return -1 if id is not a specific stats type ID
+ *
+ * Note that the value is returned as a signed integer.  So -1 is
+ * an error code, while 0xffff is the usual "experimenter" code.
+ */
+
+static inline int
+of_object_to_flow_mod_command(of_object_id_t id, of_version_t version)
+{
+    if (!OF_VERSION_OKAY(version)) {
+        return -1;
+    }
+    switch (id) {
+""")
+    # Assumes 1.2 contains all stats types and type values are
+    # the same across all versions
+    flow_mod_names = dict()
+    for ver in of_g.of_version_range:
+        for name, value in type_maps.flow_mod_types[ver].items():
+            if name in flow_mod_names and \
+                    (not value == flow_mod_names[name]):
+                print "ERROR flow mod command differ violating assumption"
+                sys.exit(1)
+            flow_mod_names[name] = value
+
+    for name, value in flow_mod_names.items():
+        out.write("    case OF_FLOW_%s:\n" % name.upper())
+        for version in of_g.of_version_range:
+            if not name in type_maps.flow_mod_types[version]:
+                out.write("        if (version == %s) break;\n" %
+                          of_g.of_version_wire2name[version])
+        out.write("        return %d;\n" % value)
+    out.write("""
+    default:
+        break;
+    }
+    return -1; /* Not recognized as flow mod type object for this version */
+}
+
+""")
+
+def gen_type_maps_header(out):
+    """
+    Generate various header file declarations for type maps
+    @param out The file handle to write to
+    """
+
+    out.write("""
+/**
+ * Generic experimenter type value.  Applies to all except the
+ * top level message: action, instruction, error, stats, queue_props, oxm
+ */
+#define OF_EXPERIMENTER_TYPE 0xffff
+""")
+    gen_type_to_obj_map_functions(out)
+    gen_obj_to_type_map_functions(out)
+
+    out.write("extern int *of_object_fixed_len[OF_VERSION_ARRAY_MAX];\n")
+
+    out.write("""
+/**
+ * Map a message in a wire buffer object to its OF object id.
+ * @param wbuf Pointer to a wire buffer object, populated with an OF message
+ * @returns The object ID of the message
+ * @returns OF_OBJECT_INVALID if unable to parse the message type
+ */
+
+static inline of_object_id_t
+of_wire_object_id_get(of_wire_buffer_t *wbuf)
+{
+    of_message_t msg;
+
+    msg = (of_message_t)WBUF_BUF(wbuf);
+    return of_message_to_object_id(msg, WBUF_CURRENT_BYTES(wbuf));
+}
+
+/**
+ * Use the type/length from the wire buffer and init the object
+ * @param obj The object being initialized
+ * @param base_object_id If > 0, this indicates the base object
+ * type for inheritance checking
+ * @param max_len If > 0, the max length to expect for the obj
+ * @return OF_ERROR_
+ *
+ * Used for inheritance type objects such as actions and OXMs
+ * The type is checked and if valid, the object is initialized.
+ * Then the length is taken from the buffer.
+ *
+ * Note that the object version must already be properly set.
+ */
+static inline int
+of_object_wire_init(of_object_t *obj, of_object_id_t base_object_id,
+                    int max_len)
+{
+    if (obj->wire_type_get != NULL) {
+        of_object_id_t id;
+        obj->wire_type_get(obj, &id);
+        if (!of_wire_id_valid(id, base_object_id)) {
+            return OF_ERROR_PARSE;
+        }
+        obj->object_id = id;
+        /* Call the init function for this object type; do not push to wire */
+        of_object_init_map[id]((of_object_t *)(obj), obj->version, -1, 0);
+    }
+    if (obj->wire_length_get != NULL) {
+        int length;
+        obj->wire_length_get(obj, &length);
+        if (length < 0 || (max_len > 0 && length > max_len)) {
+            return OF_ERROR_PARSE;
+        }
+        obj->length = length;
+    } else {
+        /* @fixme Does this cover everything else? */
+        obj->length = of_object_fixed_len[obj->version][base_object_id];
+    }
+
+    return OF_ERROR_NONE;
+}
+
+""")
+
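+    # Illustrative caller of of_object_wire_init() above (a sketch only;
+    # OF_ACTION as the base/parent object id and 'remaining' as the number
+    # of bytes left in the buffer are assumptions, not defined in this file):
+    #
+    #     of_action_t elt;
+    #     /* elt already has its version and wire buffer attached */
+    #     if (of_object_wire_init((of_object_t *)&elt, OF_ACTION,
+    #                             remaining) != OF_ERROR_NONE) {
+    #         /* parse error */
+    #     }
+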
+    # Generate the function that sets the object type fields
+    out.write("""
+
+/**
+ * Set the wire type fields of a message from its OF object id.
+ * @param wbuf Pointer to a wire buffer object, populated with an OF message
+ * @param id The object ID of the message being set
+ * @returns OF_ERROR_NONE on success, OF_ERROR_PARAM if id has no wire type
+ *
+ * Version must be set in the buffer prior to calling this routine
+ */
+
+static inline int
+of_wire_message_object_id_set(of_wire_buffer_t *wbuf, of_object_id_t id)
+{
+    int type;
+    of_version_t ver;
+    of_message_t msg;
+
+    msg = (of_message_t)WBUF_BUF(wbuf);
+
+    ver = of_message_version_get(msg);
+
+    /* ASSERT(id is a message object) */
+
+    if ((type = of_object_to_wire_type(id, ver)) < 0) {
+        return OF_ERROR_PARAM;
+    }
+    of_message_type_set(msg, type);
+
+    if ((type = of_object_to_stats_type(id, ver)) >= 0) {
+        /* It's a stats obj */
+        of_message_stats_type_set(msg, type);
+    }
+    if ((type = of_object_to_flow_mod_command(id, ver)) >= 0) {
+        /* It's a flow mod obj */
+        of_message_flow_mod_command_set(msg, ver, type);
+    }
+    if (of_object_id_is_extension(id, ver)) {
+        uint32_t val32;
+
+        /* Set the experimenter and subtype codes */
+        val32 = of_extension_to_experimenter_id(id, ver);
+        of_message_experimenter_id_set(msg, val32);
+        val32 = of_extension_to_experimenter_subtype(id, ver);
+        of_message_experimenter_subtype_set(msg, val32);
+    }
+
+    return OF_ERROR_NONE;
+}
+""")
+
+def gen_type_data_header(out):
+
+    out.write("""
+/****************************************************************
+ *
+ * The following declarations are for type and length calculations.
+ * Implementations may be found in of_type_maps.c
+ *
+ ****************************************************************/
+/*
+ * Special case length and offset functions for object members
+ * whose location depends on preceding variable length data
+ */
+""")
+    for ((cls, name), prev) in of_g.special_offsets.items():
+        s_cls = cls[3:] # take off of_
+        out.write("""
+/**
+ * Special length calculation for %(cls)s->%(name)s.
+ * @param obj An object of type %(cls)s to check for 
+ * length of %(name)s
+ * @param bytes[out] Where to store the calculated length
+ *
+ * Preceding data member is %(prev)s.
+ */
+extern int of_length_%(s_cls)s_%(name)s_get(
+    %(cls)s_t *obj, int *bytes);
+
+/**
+ * Special offset calculation for %(cls)s->%(name)s.
+ * @param obj An object of type %(cls)s to check for 
+ * offset of %(name)s
+ * @param offset[out] Where to store the calculated offset
+ *
+ * Preceding data member is %(prev)s.
+ */
+extern int of_offset_%(s_cls)s_%(name)s_get(
+    %(cls)s_t *obj, int *offset);
+""" % dict(cls=cls, s_cls=s_cls, name=name, prev=prev))
+
+# NOT NEEDED YET
+#     # For non-message, variable length objects, give a fun that
+#     # calculates the length
+#     for cls in of_g.standard_class_order:
+#         s_cls = cls[3:] # take off of_
+#         if not type_is_var_len(cls, version):
+#             continue
+#         out.write("""
+# /**
+#  * Special length calculation for variable length object %(cls)s
+#  * @param obj An object of type %(cls)s whose length is being calculated
+#  * @param bytes[out] Where to store the calculated length
+#  *
+#  * The assumption is that the length member of the object is not
+#  * valid and the length needs to be calculated from other information
+#  * such as the parent.
+#  */
+# extern int of_length_%(s_cls)s_get(
+#     %(cls)s_t *obj, int *bytes);
+# """ % dict(cls=cls, s_cls=s_cls))        
+
+    out.write("""
+/****************************************************************
+ * Wire type/length functions.
+ ****************************************************************/
+
+extern void of_object_message_wire_length_get(of_object_t *obj, int *bytes);
+extern void of_object_message_wire_length_set(of_object_t *obj, int bytes);
+
+extern void of_oxm_wire_length_get(of_object_t *obj, int *bytes);
+extern void of_oxm_wire_length_set(of_object_t *obj, int bytes);
+extern void of_oxm_wire_object_id_get(of_object_t *obj, of_object_id_t *id);
+extern void of_oxm_wire_object_id_set(of_object_t *obj, of_object_id_t id);
+
+extern void of_tlv16_wire_length_get(of_object_t *obj, int *bytes);
+extern void of_tlv16_wire_length_set(of_object_t *obj, int bytes);
+
+extern void of_tlv16_wire_object_id_set(of_object_t *obj, of_object_id_t id);
+
+/* Wire length is uint16 at front of structure */
+extern void of_u16_len_wire_length_get(of_object_t *obj, int *bytes);
+extern void of_u16_len_wire_length_set(of_object_t *obj, int bytes);
+
+extern void of_action_wire_object_id_get(of_object_t *obj, of_object_id_t *id);
+extern void of_action_id_wire_object_id_get(of_object_t *obj, of_object_id_t *id);
+extern void of_instruction_wire_object_id_get(of_object_t *obj, 
+    of_object_id_t *id);
+extern void of_queue_prop_wire_object_id_get(of_object_t *obj, 
+    of_object_id_t *id);
+extern void of_table_feature_prop_wire_object_id_get(of_object_t *obj, 
+    of_object_id_t *id);
+extern void of_meter_band_wire_object_id_get(of_object_t *obj, 
+    of_object_id_t *id);
+extern void of_hello_elem_wire_object_id_get(of_object_t *obj, 
+    of_object_id_t *id);
+
+/** @fixme VERIFY LENGTH IS NUMBER OF BYTES OF ENTRY INCLUDING HDR */
+#define OF_OXM_MASKED_TYPE_GET(hdr) (((hdr) >> 8) & 0xff)
+#define OF_OXM_MASKED_TYPE_SET(hdr, val)                    \\
+    (hdr) = ((hdr) & 0xffff00ff) + (((val) & 0xff) << 8)
+
+#define OF_OXM_LENGTH_GET(hdr) ((hdr) & 0xff)
+#define OF_OXM_LENGTH_SET(hdr, val)                         \\
+    (hdr) = ((hdr) & 0xffffff00) + ((val) & 0xff)
+
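+/*
+ * Illustrative example (a sketch, not tied to any particular OXM): for a
+ * 32-bit OXM header word hdr == 0x80000408 (class 0x8000, field/has_mask
+ * byte 0x04, length 8), OF_OXM_MASKED_TYPE_GET(hdr) yields 0x04 and
+ * OF_OXM_LENGTH_GET(hdr) yields 8.
+ */
+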
+extern void of_packet_queue_wire_length_get(of_object_t *obj, int *bytes);
+extern void of_packet_queue_wire_length_set(of_object_t *obj, int bytes);
+
+extern void of_list_meter_band_stats_wire_length_get(of_object_t *obj,
+                                                    int *bytes);
+extern void of_meter_stats_wire_length_get(of_object_t *obj, int *bytes);
+extern void of_meter_stats_wire_length_set(of_object_t *obj, int bytes);
+extern int of_extension_object_wire_push(of_object_t *obj);
+
+""")
+
+
+def gen_length_array(out):
+    """
+    Generate an array giving the lengths of all objects/versions
+    @param out The file handle to which to write
+    """
+    out.write("""
+/**
+ * An array with the number of bytes in the fixed length part
+ * of each OF object
+ */
+""")
+
+    for version in of_g.of_version_range:
+        out.write("""
+static int
+of_object_fixed_len_v%d[OF_OBJECT_COUNT] = {
+    -1,   /* of_object is not instantiable */
+""" % version)
+        for i, cls in enumerate(of_g.all_class_order):
+            comma = ","
+            if i == len(of_g.all_class_order) - 1:
+                comma = ""
+            val = "-1" + comma
+            if (cls, version) in of_g.base_length:
+                val = str(of_g.base_length[(cls, version)]) + comma
+            out.write("    %-5s /* %d: %s */\n" % (val, i + 1, cls))
+        out.write("};\n")
+
+    out.write("""
+/**
+ * Unified map of fixed length part of each object
+ */
+int *of_object_fixed_len[OF_VERSION_ARRAY_MAX] = {
+    NULL,
+""")
+    for version in of_g.of_version_range:
+        out.write("    of_object_fixed_len_v%d,\n" % version)
+    out.write("""
+};
+""")
+
+    
+################################################################
+################################################################
+
+# THIS IS PROBABLY NOT NEEDED AND MAY NOT BE CALLED CURRENTLY
+def gen_object_id_to_stats_type(out):
+    out.write("""
+/**
+ * Map from message object ID to stats type
+ *
+ * All message object IDs are mapped for simplicity
+ */
+""")
+    for version in of_g.of_version_range:
+        out.write("int *of_object_to_stats_type_map_v%d = {\n" % (i+1))
+        out.write("    -1, /* of_object (invalid) */\n");
+        for cls in of_g.ordered_messages:
+            name = cls[3:]
+            name = name[:name.find("_stats")]
+            if (((cls in type_maps.stats_reply_list) or
+                 (cls in type_maps.stats_request_list)) and
+                name in type_maps.stats_types[version]):
+                out.write("    %d, /* %s */\n" %
+                          (type_maps.stats_types[version][name], cls))
+            else:
+                out.write("    -1, /* %s (invalid) */\n" % cls)
+        out.write("};\n\n")
+
+    out.write("""
+/**
+ * Unified map, indexed by wire version which is 1-based.
+ */
+int *of_object_to_stats_type_map[OF_VERSION_ARRAY_MAX] = {
+    NULL,
+""")
+    for version in of_g.of_version_range:
+        out.write("    of_object_to_stats_type_map_v%d,\n" % version)
+    out.write("""
+};
+""")
+
diff --git a/c_gen/c_validator_gen.py b/c_gen/c_validator_gen.py
new file mode 100644
index 0000000..3ab6acf
--- /dev/null
+++ b/c_gen/c_validator_gen.py
@@ -0,0 +1,302 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+@brief Validator function generation
+
+Generates validator function files.
+
+"""
+
+import sys
+import of_g
+import loxi_front_end.match as match
+import loxi_front_end.flags as flags
+from generic_utils import *
+import loxi_front_end.type_maps as type_maps
+import loxi_utils.loxi_utils as loxi_utils
+import loxi_front_end.identifiers as identifiers
+from c_test_gen import var_name_map
+
+def gen_h(out, name):
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/**
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * Declarations of message validation functions. These functions check that an
+ * OpenFlow message is well formed. Specifically, they check internal length
+ * fields.
+ */
+
+#if !defined(_LOCI_VALIDATOR_H_)
+#define _LOCI_VALIDATOR_H_
+
+#include <loci/loci.h>
+
+/*
+ * Validate an OpenFlow message.
+ * @return 0 if message is valid, -1 otherwise.
+ */
+extern int of_validate_message(of_message_t msg, int len);
+
+#endif /* _LOCI_VALIDATOR_H_ */
+""")
+
+def gen_c(out, name):
+    loxi_utils.gen_c_copy_license(out)
+    out.write("""
+/**
+ *
+ * AUTOMATICALLY GENERATED FILE.  Edits will be lost on regen.
+ *
+ * Source file for OpenFlow message validation.
+ *
+ */
+
+#include "loci_log.h"
+#include <loci/loci.h>
+#include <loci/loci_validator.h>
+
+#define VALIDATOR_LOG(...) LOCI_LOG_ERROR("Validator Error: " __VA_ARGS__)
+
+""")
+
+    # Declarations
+    for version in of_g.of_version_range:
+        ver_name = loxi_utils.version_to_name(version)
+        for cls in reversed(of_g.standard_class_order):
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            out.write("""
+static inline int %(cls)s_%(ver_name)s_validate(uint8_t *buf, int len);\
+""" % dict(cls=cls, ver_name=ver_name))
+
+    out.write("\n")
+
+    # Definitions
+    for version in of_g.of_version_range:
+        ver_name = loxi_utils.version_to_name(version)
+        for cls in reversed(of_g.standard_class_order):
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            if loxi_utils.class_is_list(cls):
+                gen_list_validator(out, cls, version)
+            else:
+                gen_validator(out, cls, version)
+
+        out.write("""
+int
+of_validate_message_%(ver_name)s(of_message_t msg, int len)
+{
+    of_object_id_t object_id = of_message_to_object_id(msg, len);
+    uint8_t *buf = OF_MESSAGE_TO_BUFFER(msg);
+    switch (object_id) {
+""" % dict(ver_name=ver_name))
+        for cls in reversed(of_g.standard_class_order):
+            if not loxi_utils.class_in_version(cls, version):
+                continue
+            if cls in type_maps.inheritance_map:
+                continue
+            if loxi_utils.class_is_message(cls):
+                out.write("""\
+    case %(cls_id)s:
+        return %(cls)s_%(ver_name)s_validate(buf, len);
+""" % dict(ver_name=ver_name, cls=cls, cls_id=cls.upper()))
+        out.write("""\
+    default:
+        VALIDATOR_LOG("%(cls)s: could not map %(cls_id)s");
+        return -1;
+    }
+}
+""" % dict(ver_name=ver_name, cls=cls, cls_id=cls.upper()))
+
+    out.write("""
+int
+of_validate_message(of_message_t msg, int len)
+{
+    of_version_t version;
+    if (len < OF_MESSAGE_MIN_LENGTH ||
+        len != of_message_length_get(msg)) {
+        VALIDATOR_LOG("message length %d != %d", len,
+                      of_message_length_get(msg));
+        return -1;
+    }
+
+    version = of_message_version_get(msg);
+    switch (version) {
+""")
+
+    for version in of_g.of_version_range:
+        ver_name = loxi_utils.version_to_name(version)
+        out.write("""\
+    case %(ver_name)s:
+        return of_validate_message_%(ver_name)s(msg, len);
+""" % dict(ver_name=ver_name))
+
+    out.write("""\
+    default:
+        VALIDATOR_LOG("Bad version %%d", %(ver_name)s);
+        return -1;
+    }
+}
+""" % dict(ver_name=ver_name))
+
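+# Illustrative use of the generated entry point (a sketch; 'rx_buf' and
+# 'rx_len' are hypothetical caller variables, not defined here):
+#
+#     of_message_t msg = (of_message_t)rx_buf;
+#     if (of_validate_message(msg, rx_len) < 0) {
+#         /* reject the malformed message */
+#     }
+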
+def gen_validator(out, cls, version):
+    fixed_len = of_g.base_length[(cls, version)];
+    ver_name = loxi_utils.version_to_name(version)
+    out.write("""
+static inline int
+%(cls)s_%(ver_name)s_validate(uint8_t *buf, int len)
+{
+    if (len < %(fixed_len)s) {
+        VALIDATOR_LOG("Class %(cls)s.  Len %%d too small, < %%d", len, %(fixed_len)s);
+        return -1;
+    }
+""" % dict(cls=cls, ver_name=ver_name, cls_id=cls.upper(), fixed_len=fixed_len))
+    members, member_types = loxi_utils.all_member_types_get(cls, version)
+    for member in members:
+        m_type = member["m_type"]
+        m_name = member["name"]
+        m_offset = member['offset']
+        m_cls = m_type[:-2] # Trim _t
+        if loxi_utils.skip_member_name(m_name):
+            continue
+        if not loxi_utils.type_is_of_object(m_type):
+            continue
+        if not loxi_utils.class_is_var_len(m_cls, version):
+            continue
+        if cls == "of_packet_out" and m_name == "actions":
+            # See _PACKET_OUT_ACTION_LEN
+            out.write("""
+    {
+        uint16_t %(m_name)s_len;
+        buf_u16_get(buf + %(m_offset)s - 2, &%(m_name)s_len);
+        if (%(m_name)s_len + %(m_offset)s > len) {
+            VALIDATOR_LOG("Class %(cls)s, member %(m_name)s.  "
+                          "Len %%d and offset %%d too big for %%d",
+                          %(m_name)s_len, %(m_offset)s, len);
+            return -1;
+        }
+""" % dict(m_name=m_name, m_offset=m_offset, cls=cls))
+        else:
+            out.write("""
+    
+    {    int %(m_name)s_len = len - %(m_offset)s;
+   
+"""  % dict(m_name=m_name, m_offset=m_offset))
+        out.write("""
+        if (%(m_cls)s_%(ver_name)s_validate(buf + %(m_offset)s, %(m_name)s_len) < 0) {
+            return -1;
+        }
+    }
+""" % dict(m_name=m_name, m_cls=m_cls, ver_name=ver_name, m_offset=m_offset))
+    out.write("""
+    return 0;
+}
+""")
+
+def gen_list_validator(out, cls, version):
+    ver_name = loxi_utils.version_to_name(version)
+    e_cls = loxi_utils.list_to_entry_type(cls)
+    fixed_len = of_g.base_length[(e_cls, version)];
+    out.write("""
+static inline int
+%(cls)s_%(ver_name)s_validate(uint8_t *buf, int len)
+{
+""" % dict(cls=cls, ver_name=ver_name, cls_id=cls.upper(), e_cls=e_cls))
+
+    # TLV16
+    if loxi_utils.class_is_tlv16(e_cls):
+        subclasses = type_maps.inheritance_map[e_cls]
+        out.write("""\
+    while (len >= %(fixed_len)s) {
+        of_object_id_t e_id; 
+        uint16_t e_type, e_len;
+        buf_u16_get(buf, &e_type);
+        buf_u16_get(buf+2, &e_len);
+        e_id = %(e_cls)s_to_object_id(e_type, %(ver_name)s);
+        switch (e_id) {
+""" % dict(fixed_len=fixed_len, ver_name=ver_name, e_cls=e_cls))
+        for subcls in subclasses:
+            subcls = e_cls + '_' + subcls
+            if not loxi_utils.class_in_version(subcls, version):
+                continue
+            out.write("""\
+        case %(subcls_enum)s:
+            if (%(subcls)s_%(ver_name)s_validate(buf, e_len) < 0) {
+                return -1;
+            }
+            break;
+""" % dict(ver_name=ver_name, subcls=subcls, subcls_enum=loxi_utils.enum_name(subcls)))
+        out.write("""\
+        default:
+            return -1;
+        }
+        buf += e_len;
+        len -= e_len;
+    }
+    if (len != 0) {
+        return -1;
+    }
+""" % dict(e_cls=e_cls, ver_name=ver_name))
+
+    # U16 len
+    elif loxi_utils.class_is_u16_len(e_cls) or loxi_utils.class_is_action(e_cls):
+        out.write("""\
+    /* TODO verify U16 len elements */
+""" % dict())
+
+    # OXM
+    elif loxi_utils.class_is_oxm(e_cls):
+        out.write("""\
+    /* TODO verify OXM elements */
+""" % dict())
+
+    # Fixed length
+    elif not loxi_utils.class_is_var_len(e_cls, version):
+        out.write("""\
+    if ((len / %(fixed_len)s) * %(fixed_len)s != len) {
+        return -1;
+    }
+""" % dict(fixed_len=fixed_len))
+
+    # ???
+    else:
+        out.write("""\
+    /* XXX unknown element format */
+""" % dict())
+
+    out.write("""
+    return 0;
+}
+""")
diff --git a/c_gen/templates/.gitignore b/c_gen/templates/.gitignore
new file mode 100644
index 0000000..c3ed10e
--- /dev/null
+++ b/c_gen/templates/.gitignore
@@ -0,0 +1 @@
+*.cache
diff --git a/c_gen/templates/Doxyfile b/c_gen/templates/Doxyfile
new file mode 100644
index 0000000..983aba9
--- /dev/null
+++ b/c_gen/templates/Doxyfile
@@ -0,0 +1,1551 @@
+# Doxyfile 1.6.3
+
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project
+#
+# All text after a hash (#) is considered a comment and will be ignored
+# The format is:
+#       TAG = value [value, ...]
+# For lists items can also be appended using:
+#       TAG += value [value, ...]
+# Values that contain spaces should be placed between quotes (" ")
+
+#---------------------------------------------------------------------------
+# Project related configuration options
+#---------------------------------------------------------------------------
+
+# This tag specifies the encoding used for all characters in the config file
+# that follow. The default is UTF-8 which is also the encoding used for all
+# text before the first occurrence of this tag. Doxygen uses libiconv (or the
+# iconv built into libc) for the transcoding. See
+# http://www.gnu.org/software/libiconv for the list of possible encodings.
+
+DOXYFILE_ENCODING      = UTF-8
+
+# The PROJECT_NAME tag is a single word (or a sequence of words surrounded
+# by quotes) that should identify the project.
+
+PROJECT_NAME           = "LOCI OpenFlow Interfaces"
+
+# The PROJECT_NUMBER tag can be used to enter a project or revision number.
+# This could be handy for archiving the generated documentation or
+# if some version control system is used.
+
+PROJECT_NUMBER         =
+
+# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute)
+# base path where the generated documentation will be put.
+# If a relative path is entered, it will be relative to the location
+# where doxygen was started. If left blank the current directory will be used.
+
+OUTPUT_DIRECTORY       = doc
+
+# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create
+# 4096 sub-directories (in 2 levels) under the output directory of each output
+# format and will distribute the generated files over these directories.
+# Enabling this option can be useful when feeding doxygen a huge amount of
+# source files, where putting all generated files in the same directory would
+# otherwise cause performance problems for the file system.
+
+CREATE_SUBDIRS         = NO
+
+# The OUTPUT_LANGUAGE tag is used to specify the language in which all
+# documentation generated by doxygen is written. Doxygen will use this
+# information to generate all constant output in the proper language.
+# The default language is English, other supported languages are:
+# Afrikaans, Arabic, Brazilian, Catalan, Chinese, Chinese-Traditional,
+# Croatian, Czech, Danish, Dutch, Esperanto, Farsi, Finnish, French, German,
+# Greek, Hungarian, Italian, Japanese, Japanese-en (Japanese with English
+# messages), Korean, Korean-en, Lithuanian, Norwegian, Macedonian, Persian,
+# Polish, Portuguese, Romanian, Russian, Serbian, Serbian-Cyrilic, Slovak,
+# Slovene, Spanish, Swedish, Ukrainian, and Vietnamese.
+
+OUTPUT_LANGUAGE        = English
+
+# If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will
+# include brief member descriptions after the members that are listed in
+# the file and class documentation (similar to JavaDoc).
+# Set to NO to disable this.
+
+BRIEF_MEMBER_DESC      = YES
+
+# If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend
+# the brief description of a member or function before the detailed description.
+# Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
+# brief descriptions will be completely suppressed.
+
+REPEAT_BRIEF           = YES
+
+# This tag implements a quasi-intelligent brief description abbreviator
+# that is used to form the text in various listings. Each string
+# in this list, if found as the leading text of the brief description, will be
+# stripped from the text and the result after processing the whole list, is
+# used as the annotated text. Otherwise, the brief description is used as-is.
+# If left blank, the following values are used ("$name" is automatically
+# replaced with the name of the entity): "The $name class" "The $name widget"
+# "The $name file" "is" "provides" "specifies" "contains"
+# "represents" "a" "an" "the"
+
+ABBREVIATE_BRIEF       =
+
+# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
+# Doxygen will generate a detailed section even if there is only a brief
+# description.
+
+ALWAYS_DETAILED_SEC    = NO
+
+# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
+# inherited members of a class in the documentation of that class as if those
+# members were ordinary class members. Constructors, destructors and assignment
+# operators of the base classes will not be shown.
+
+INLINE_INHERITED_MEMB  = NO
+
+# If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full
+# path before files name in the file list and in the header files. If set
+# to NO the shortest path that makes the file name unique will be used.
+
+FULL_PATH_NAMES        = YES
+
+# If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag
+# can be used to strip a user-defined part of the path. Stripping is
+# only done if one of the specified strings matches the left-hand part of
+# the path. The tag can be used to show relative paths in the file list.
+# If left blank the directory from which doxygen is run is used as the
+# path to strip.
+
+STRIP_FROM_PATH        =
+
+# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of
+# the path mentioned in the documentation of a class, which tells
+# the reader which header file to include in order to use a class.
+# If left blank only the name of the header file containing the class
+# definition is used. Otherwise one should specify the include paths that
+# are normally passed to the compiler using the -I flag.
+
+STRIP_FROM_INC_PATH    =
+
+# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter
+# (but less readable) file names. This can be useful if your file system
+# doesn't support long names like on DOS, Mac, or CD-ROM.
+
+SHORT_NAMES            = NO
+
+# If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen
+# will interpret the first line (until the first dot) of a JavaDoc-style
+# comment as the brief description. If set to NO, the JavaDoc
+# comments will behave just like regular Qt-style comments
+# (thus requiring an explicit @brief command for a brief description.)
+
+JAVADOC_AUTOBRIEF      = NO
+
+# If the QT_AUTOBRIEF tag is set to YES then Doxygen will
+# interpret the first line (until the first dot) of a Qt-style
+# comment as the brief description. If set to NO, the comments
+# will behave just like regular Qt-style comments (thus requiring
+# an explicit \brief command for a brief description.)
+
+QT_AUTOBRIEF           = NO
+
+# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make Doxygen
+# treat a multi-line C++ special comment block (i.e. a block of //! or ///
+# comments) as a brief description. This used to be the default behaviour.
+# The new default is to treat a multi-line C++ comment block as a detailed
+# description. Set this tag to YES if you prefer the old behaviour instead.
+
+MULTILINE_CPP_IS_BRIEF = NO
+
+# If the INHERIT_DOCS tag is set to YES (the default) then an undocumented
+# member inherits the documentation from any documented member that it
+# re-implements.
+
+INHERIT_DOCS           = YES
+
+# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce
+# a new page for each member. If set to NO, the documentation of a member will
+# be part of the file/class/namespace that contains it.
+
+SEPARATE_MEMBER_PAGES  = NO
+
+# The TAB_SIZE tag can be used to set the number of spaces in a tab.
+# Doxygen uses this value to replace tabs by spaces in code fragments.
+
+TAB_SIZE               = 8
+
+# This tag can be used to specify a number of aliases that acts
+# as commands in the documentation. An alias has the form "name=value".
+# For example adding "sideeffect=\par Side Effects:\n" will allow you to
+# put the command \sideeffect (or @sideeffect) in the documentation, which
+# will result in a user-defined paragraph with heading "Side Effects:".
+# You can put \n's in the value part of an alias to insert newlines.
+
+ALIASES                =
+
+# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C
+# sources only. Doxygen will then generate output that is more tailored for C.
+# For instance, some of the names that are used will be different. The list
+# of all members will be omitted, etc.
+
+OPTIMIZE_OUTPUT_FOR_C  = YES
+
+# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java
+# sources only. Doxygen will then generate output that is more tailored for
+# Java. For instance, namespaces will be presented as packages, qualified
+# scopes will look different, etc.
+
+OPTIMIZE_OUTPUT_JAVA   = NO
+
+# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
+# sources only. Doxygen will then generate output that is more tailored for
+# Fortran.
+
+OPTIMIZE_FOR_FORTRAN   = NO
+
+# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
+# sources. Doxygen will then generate output that is tailored for
+# VHDL.
+
+OPTIMIZE_OUTPUT_VHDL   = NO
+
+# Doxygen selects the parser to use depending on the extension of the files it parses.
+# With this tag you can assign which parser to use for a given extension.
+# Doxygen has a built-in mapping, but you can override or extend it using this tag.
+# The format is ext=language, where ext is a file extension, and language is one of
+# the parsers supported by doxygen: IDL, Java, Javascript, C#, C, C++, D, PHP,
+# Objective-C, Python, Fortran, VHDL, C, C++. For instance to make doxygen treat
+# .inc files as Fortran files (default is PHP), and .f files as C (default is Fortran),
+# use: inc=Fortran f=C. Note that for custom extensions you also need to set
+# FILE_PATTERNS otherwise the files are not read by doxygen.
+
+EXTENSION_MAPPING      =
+
+# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
+# to include (a tag file for) the STL sources as input, then you should
+# set this tag to YES in order to let doxygen match function declarations and
+# definitions whose arguments contain STL classes (e.g. func(std::string); vs.
+# func(std::string) {}). This also makes the inheritance and collaboration
+# diagrams that involve STL classes more complete and accurate.
+
+BUILTIN_STL_SUPPORT    = NO
+
+# If you use Microsoft's C++/CLI language, you should set this option to YES to
+# enable parsing support.
+
+CPP_CLI_SUPPORT        = NO
+
+# Set the SIP_SUPPORT tag to YES if your project consists of sip sources only.
+# Doxygen will parse them like normal C++ but will assume all classes use public
+# instead of private inheritance when no explicit protection keyword is present.
+
+SIP_SUPPORT            = NO
+
+# For Microsoft's IDL there are propget and propput attributes to indicate getter
+# and setter methods for a property. Setting this option to YES (the default)
+# will make doxygen replace the get and set methods by a property in the
+# documentation. This will only work if the methods are indeed getting or
+# setting a simple type. If this is not the case, or you want to show the
+# methods anyway, you should set this option to NO.
+
+IDL_PROPERTY_SUPPORT   = YES
+
+# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
+# tag is set to YES, then doxygen will reuse the documentation of the first
+# member in the group (if any) for the other members of the group. By default
+# all members of a group must be documented explicitly.
+
+DISTRIBUTE_GROUP_DOC   = NO
+
+# Set the SUBGROUPING tag to YES (the default) to allow class member groups of
+# the same type (for instance a group of public functions) to be put as a
+# subgroup of that type (e.g. under the Public Functions section). Set it to
+# NO to prevent subgrouping. Alternatively, this can be done per class using
+# the \nosubgrouping command.
+
+SUBGROUPING            = YES
+
+# When TYPEDEF_HIDES_STRUCT is enabled, a typedef of a struct, union, or enum
+# is documented as struct, union, or enum with the name of the typedef. So
+# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
+# with name TypeT. When disabled the typedef will appear as a member of a file,
+# namespace, or class. And the struct will be named TypeS. This can typically
+# be useful for C code in case the coding convention dictates that all compound
+# types are typedef'ed and only the typedef is referenced, never the tag name.
+
+TYPEDEF_HIDES_STRUCT   = NO
+
+# The SYMBOL_CACHE_SIZE determines the size of the internal cache use to
+# determine which symbols to keep in memory and which to flush to disk.
+# When the cache is full, less often used symbols will be written to disk.
+# For small to medium size projects (<1000 input files) the default value is
+# probably good enough. For larger projects a too small cache size can cause
+# doxygen to be busy swapping symbols to and from disk most of the time
+# causing a significant performance penalty.
+# If the system has enough physical memory increasing the cache will improve the
+# performance by keeping more symbols in memory. Note that the value works on
+# a logarithmic scale so increasing the size by one will roughly double the
+# memory usage. The cache size is given by this formula:
+# 2^(16+SYMBOL_CACHE_SIZE). The valid range is 0..9, the default is 0,
+# corresponding to a cache size of 2^16 = 65536 symbols
+
+SYMBOL_CACHE_SIZE      = 0
+
+#---------------------------------------------------------------------------
+# Build related configuration options
+#---------------------------------------------------------------------------
+
+# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in
+# documentation are documented, even if no documentation was available.
+# Private class members and static file members will be hidden unless
+# the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES
+
+EXTRACT_ALL            = YES
+
+# If the EXTRACT_PRIVATE tag is set to YES all private members of a class
+# will be included in the documentation.
+
+EXTRACT_PRIVATE        = NO
+
+# If the EXTRACT_STATIC tag is set to YES all static members of a file
+# will be included in the documentation.
+
+EXTRACT_STATIC         = YES
+
+# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs)
+# defined locally in source files will be included in the documentation.
+# If set to NO only classes defined in header files are included.
+
+EXTRACT_LOCAL_CLASSES  = YES
+
+# This flag is only useful for Objective-C code. When set to YES local
+# methods, which are defined in the implementation section but not in
+# the interface are included in the documentation.
+# If set to NO (the default) only methods in the interface are included.
+
+EXTRACT_LOCAL_METHODS  = NO
+
+# If this flag is set to YES, the members of anonymous namespaces will be
+# extracted and appear in the documentation as a namespace called
+# 'anonymous_namespace{file}', where file will be replaced with the base
+# name of the file that contains the anonymous namespace. By default
+# anonymous namespace are hidden.
+
+EXTRACT_ANON_NSPACES   = NO
+
+# If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all
+# undocumented members of documented classes, files or namespaces.
+# If set to NO (the default) these members will be included in the
+# various overviews, but no documentation section is generated.
+# This option has no effect if EXTRACT_ALL is enabled.
+
+HIDE_UNDOC_MEMBERS     = NO
+
+# If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all
+# undocumented classes that are normally visible in the class hierarchy.
+# If set to NO (the default) these classes will be included in the various
+# overviews. This option has no effect if EXTRACT_ALL is enabled.
+
+HIDE_UNDOC_CLASSES     = NO
+
+# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, Doxygen will hide all
+# friend (class|struct|union) declarations.
+# If set to NO (the default) these declarations will be included in the
+# documentation.
+
+HIDE_FRIEND_COMPOUNDS  = NO
+
+# If the HIDE_IN_BODY_DOCS tag is set to YES, Doxygen will hide any
+# documentation blocks found inside the body of a function.
+# If set to NO (the default) these blocks will be appended to the
+# function's detailed documentation block.
+
+HIDE_IN_BODY_DOCS      = NO
+
+# The INTERNAL_DOCS tag determines if documentation
+# that is typed after a \internal command is included. If the tag is set
+# to NO (the default) then the documentation will be excluded.
+# Set it to YES to include the internal documentation.
+
+INTERNAL_DOCS          = NO
+
+# If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate
+# file names in lower-case letters. If set to YES upper-case letters are also
+# allowed. This is useful if you have classes or files whose names only differ
+# in case and if your file system supports case sensitive file names. Windows
+# and Mac users are advised to set this option to NO.
+
+CASE_SENSE_NAMES       = YES
+
+# If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen
+# will show members with their full class and namespace scopes in the
+# documentation. If set to YES the scope will be hidden.
+
+HIDE_SCOPE_NAMES       = NO
+
+# If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen
+# will put a list of the files that are included by a file in the documentation
+# of that file.
+
+SHOW_INCLUDE_FILES     = YES
+
+# If the FORCE_LOCAL_INCLUDES tag is set to YES then Doxygen
+# will list include files with double quotes in the documentation
+# rather than with sharp brackets.
+
+FORCE_LOCAL_INCLUDES   = NO
+
+# If the INLINE_INFO tag is set to YES (the default) then a tag [inline]
+# is inserted in the documentation for inline members.
+
+INLINE_INFO            = YES
+
+# If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen
+# will sort the (detailed) documentation of file and class members
+# alphabetically by member name. If set to NO the members will appear in
+# declaration order.
+
+SORT_MEMBER_DOCS       = YES
+
+# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the
+# brief documentation of file, namespace and class members alphabetically
+# by member name. If set to NO (the default) the members will appear in
+# declaration order.
+
+SORT_BRIEF_DOCS        = YES
+
+# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
+# (brief and detailed) documentation of class members so that constructors and
+# destructors are listed first. If set to NO (the default) the constructors
+# will appear in the respective orders defined by SORT_MEMBER_DOCS and
+# SORT_BRIEF_DOCS. This tag will be ignored for brief docs if SORT_BRIEF_DOCS
+# is set to NO and ignored for detailed docs if SORT_MEMBER_DOCS is set to NO.
+
+SORT_MEMBERS_CTORS_1ST = NO
+
+# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the
+# hierarchy of group names into alphabetical order. If set to NO (the default)
+# the group names will appear in their defined order.
+
+SORT_GROUP_NAMES       = YES
+
+# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be
+# sorted by fully-qualified names, including namespaces. If set to
+# NO (the default), the class list will be sorted only by class name,
+# not including the namespace part.
+# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
+# Note: This option applies only to the class list, not to the
+# alphabetical list.
+
+SORT_BY_SCOPE_NAME     = NO
+
+# The GENERATE_TODOLIST tag can be used to enable (YES) or
+# disable (NO) the todo list. This list is created by putting \todo
+# commands in the documentation.
+
+GENERATE_TODOLIST      = YES
+
+# The GENERATE_TESTLIST tag can be used to enable (YES) or
+# disable (NO) the test list. This list is created by putting \test
+# commands in the documentation.
+
+GENERATE_TESTLIST      = YES
+
+# The GENERATE_BUGLIST tag can be used to enable (YES) or
+# disable (NO) the bug list. This list is created by putting \bug
+# commands in the documentation.
+
+GENERATE_BUGLIST       = YES
+
+# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or
+# disable (NO) the deprecated list. This list is created by putting
+# \deprecated commands in the documentation.
+
+GENERATE_DEPRECATEDLIST= YES
+
+# The ENABLED_SECTIONS tag can be used to enable conditional
+# documentation sections, marked by \if sectionname ... \endif.
+
+ENABLED_SECTIONS       =
+
+# The MAX_INITIALIZER_LINES tag determines the maximum number of lines
+# the initial value of a variable or define consists of for it to appear in
+# the documentation. If the initializer consists of more lines than specified
+# here it will be hidden. Use a value of 0 to hide initializers completely.
+# The appearance of the initializer of individual variables and defines in the
+# documentation can be controlled using \showinitializer or \hideinitializer
+# command in the documentation regardless of this setting.
+
+MAX_INITIALIZER_LINES  = 30
+
+# Set the SHOW_USED_FILES tag to NO to disable the list of files generated
+# at the bottom of the documentation of classes and structs. If set to YES the
+# list will mention the files that were used to generate the documentation.
+
+SHOW_USED_FILES        = YES
+
+# If the sources in your project are distributed over multiple directories
+# then setting the SHOW_DIRECTORIES tag to YES will show the directory hierarchy
+# in the documentation. The default is NO.
+
+SHOW_DIRECTORIES       = NO
+
+# Set the SHOW_FILES tag to NO to disable the generation of the Files page.
+# This will remove the Files entry from the Quick Index and from the
+# Folder Tree View (if specified). The default is YES.
+
+SHOW_FILES             = YES
+
+# Set the SHOW_NAMESPACES tag to NO to disable the generation of the
+# Namespaces page.
+# This will remove the Namespaces entry from the Quick Index
+# and from the Folder Tree View (if specified). The default is YES.
+
+SHOW_NAMESPACES        = YES
+
+# The FILE_VERSION_FILTER tag can be used to specify a program or script that
+# doxygen should invoke to get the current version for each file (typically from
+# the version control system). Doxygen will invoke the program by executing (via
+# popen()) the command <command> <input-file>, where <command> is the value of
+# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
+# provided by doxygen. Whatever the program writes to standard output
+# is used as the file version. See the manual for examples.
+
+FILE_VERSION_FILTER    =
+
+# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed by
+# doxygen. The layout file controls the global structure of the generated output files
+# in an output format independent way. To create the layout file that represents
+# doxygen's defaults, run doxygen with the -l option. You can optionally specify a
+# file name after the option, if omitted DoxygenLayout.xml will be used as the name
+# of the layout file.
+
+LAYOUT_FILE            =
+
+#---------------------------------------------------------------------------
+# configuration options related to warning and progress messages
+#---------------------------------------------------------------------------
+
+# The QUIET tag can be used to turn on/off the messages that are generated
+# by doxygen. Possible values are YES and NO. If left blank NO is used.
+
+QUIET                  = NO
+
+# The WARNINGS tag can be used to turn on/off the warning messages that are
+# generated by doxygen. Possible values are YES and NO. If left blank
+# NO is used.
+
+WARNINGS               = YES
+
+# If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings
+# for undocumented members. If EXTRACT_ALL is set to YES then this flag will
+# automatically be disabled.
+
+WARN_IF_UNDOCUMENTED   = YES
+
+# If WARN_IF_DOC_ERROR is set to YES, doxygen will generate warnings for
+# potential errors in the documentation, such as not documenting some
+# parameters in a documented function, or documenting parameters that
+# don't exist or using markup commands wrongly.
+
+WARN_IF_DOC_ERROR      = YES
+
+# The WARN_NO_PARAMDOC option can be enabled to get warnings for
+# functions that are documented, but have no documentation for their parameters
+# or return value. If set to NO (the default) doxygen will only warn about
+# wrong or incomplete parameter documentation, but not about the absence of
+# documentation.
+
+WARN_NO_PARAMDOC       = NO
+
+# The WARN_FORMAT tag determines the format of the warning messages that
+# doxygen can produce. The string should contain the $file, $line, and $text
+# tags, which will be replaced by the file and line number from which the
+# warning originated and the warning text. Optionally the format may contain
+# $version, which will be replaced by the version of the file (if it could
+# be obtained via FILE_VERSION_FILTER)
+
+WARN_FORMAT            = "$file:$line: $text"
+
+# The WARN_LOGFILE tag can be used to specify a file to which warning
+# and error messages should be written. If left blank the output is written
+# to stderr.
+
+WARN_LOGFILE           =
+
+#---------------------------------------------------------------------------
+# configuration options related to the input files
+#---------------------------------------------------------------------------
+
+# The INPUT tag can be used to specify the files and/or directories that contain
+# documented source files. You may enter file names like "myfile.cpp" or
+# directories like "/usr/src/myproject". Separate the files or directories
+# with spaces.
+
+INPUT                  = src inc/loci
+
+# This tag can be used to specify the character encoding of the source files
+# that doxygen parses. Internally doxygen uses the UTF-8 encoding, which is
+# also the default input encoding. Doxygen uses libiconv (or the iconv built
+# into libc) for the transcoding. See http://www.gnu.org/software/libiconv for
+# the list of possible encodings.
+
+INPUT_ENCODING         = UTF-8
+
+# If the value of the INPUT tag contains directories, you can use the
+# FILE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp
+# and *.h) to filter out the source-files in the directories. If left
+# blank the following patterns are tested:
+# *.c *.cc *.cxx *.cpp *.c++ *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh *.hxx
+# *.hpp *.h++ *.idl *.odl *.cs *.php *.php3 *.inc *.m *.mm *.py *.f90
+
+FILE_PATTERNS          =
+
+# The RECURSIVE tag can be used to specify whether or not subdirectories
+# should be searched for input files as well. Possible values are YES and NO.
+# If left blank NO is used.
+
+RECURSIVE              = NO
+
+# The EXCLUDE tag can be used to specify files and/or directories that should
+# be excluded from the INPUT source files. This way you can easily exclude a
+# subdirectory from a directory tree whose root is specified with the INPUT tag.
+
+EXCLUDE                =
+
+# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
+# directories that are symbolic links (a Unix filesystem feature) are excluded
+# from the input.
+
+EXCLUDE_SYMLINKS       = NO
+
+# If the value of the INPUT tag contains directories, you can use the
+# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
+# certain files from those directories. Note that the wildcards are matched
+# against the file with absolute path, so to exclude all test directories
+# for example use the pattern */test/*
+
+EXCLUDE_PATTERNS       =
+
+# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
+# (namespaces, classes, functions, etc.) that should be excluded from the
+# output. The symbol name can be a fully qualified name, a word, or if the
+# wildcard * is used, a substring. Examples: ANamespace, AClass,
+# AClass::ANamespace, ANamespace::*Test
+
+EXCLUDE_SYMBOLS        =
+
+# The EXAMPLE_PATH tag can be used to specify one or more files or
+# directories that contain example code fragments that are included (see
+# the \include command).
+
+EXAMPLE_PATH           =
+
+# If the value of the EXAMPLE_PATH tag contains directories, you can use the
+# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp
+# and *.h) to filter out the source-files in the directories. If left
+# blank all files are included.
+
+EXAMPLE_PATTERNS       =
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
+# searched for input files to be used with the \include or \dontinclude
+# commands irrespective of the value of the RECURSIVE tag.
+# Possible values are YES and NO. If left blank NO is used.
+
+EXAMPLE_RECURSIVE      = NO
+
+# The IMAGE_PATH tag can be used to specify one or more files or
+# directories that contain image that are included in the documentation (see
+# the \image command).
+
+IMAGE_PATH             =
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should
+# invoke to filter for each input file. Doxygen will invoke the filter program
+# by executing (via popen()) the command <filter> <input-file>, where <filter>
+# is the value of the INPUT_FILTER tag, and <input-file> is the name of an
+# input file. Doxygen will then use the output that the filter program writes
+# to standard output.
+# If FILTER_PATTERNS is specified, this tag will be
+# ignored.
+
+INPUT_FILTER           =
+
+# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
+# basis.
+# Doxygen will compare the file name with each pattern and apply the
+# filter if there is a match.
+# The filters are a list of the form:
+# pattern=filter (like *.cpp=my_cpp_filter). See INPUT_FILTER for further
+# info on how filters are used. If FILTER_PATTERNS is empty, INPUT_FILTER
+# is applied to all files.
+
+FILTER_PATTERNS        =
+
+# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
+# INPUT_FILTER) will be used to filter the input files when producing source
+# files to browse (i.e. when SOURCE_BROWSER is set to YES).
+
+FILTER_SOURCE_FILES    = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to source browsing
+#---------------------------------------------------------------------------
+
+# If the SOURCE_BROWSER tag is set to YES then a list of source files will
+# be generated. Documented entities will be cross-referenced with these sources.
+# Note: To get rid of all source code in the generated output, make sure also
+# VERBATIM_HEADERS is set to NO.
+
+SOURCE_BROWSER         = NO
+
+# Setting the INLINE_SOURCES tag to YES will include the body
+# of functions and classes directly in the documentation.
+
+INLINE_SOURCES         = NO
+
+# Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct
+# doxygen to hide any special comment blocks from generated source code
+# fragments. Normal C and C++ comments will always remain visible.
+
+STRIP_CODE_COMMENTS    = YES
+
+# If the REFERENCED_BY_RELATION tag is set to YES
+# then for each documented function all documented
+# functions referencing it will be listed.
+
+REFERENCED_BY_RELATION = NO
+
+# If the REFERENCES_RELATION tag is set to YES
+# then for each documented function all documented entities
+# called/used by that function will be listed.
+
+REFERENCES_RELATION    = NO
+
+# If the REFERENCES_LINK_SOURCE tag is set to YES (the default)
+# and SOURCE_BROWSER tag is set to YES, then the hyperlinks from
+# functions in REFERENCES_RELATION and REFERENCED_BY_RELATION lists will
+# link to the source code.
+# Otherwise they will link to the documentation.
+
+REFERENCES_LINK_SOURCE = YES
+
+# If the USE_HTAGS tag is set to YES then the references to source code
+# will point to the HTML generated by the htags(1) tool instead of doxygen
+# built-in source browser. The htags tool is part of GNU's global source
+# tagging system (see http://www.gnu.org/software/global/global.html). You
+# will need version 4.8.6 or higher.
+
+USE_HTAGS              = NO
+
+# If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen
+# will generate a verbatim copy of the header file for each class for
+# which an include is specified. Set to NO to disable this.
+
+VERBATIM_HEADERS       = YES
+
+#---------------------------------------------------------------------------
+# configuration options related to the alphabetical class index
+#---------------------------------------------------------------------------
+
+# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index
+# of all compounds will be generated. Enable this if the project
+# contains a lot of classes, structs, unions or interfaces.
+
+ALPHABETICAL_INDEX     = NO
+
+# If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then
+# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns
+# in which this list will be split (can be a number in the range [1..20])
+
+COLS_IN_ALPHA_INDEX    = 5
+
+# In case all classes in a project start with a common prefix, all
+# classes will be put under the same header in the alphabetical index.
+# The IGNORE_PREFIX tag can be used to specify one or more prefixes that
+# should be ignored while generating the index headers.
+
+IGNORE_PREFIX          =
+
+#---------------------------------------------------------------------------
+# configuration options related to the HTML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_HTML tag is set to YES (the default) Doxygen will
+# generate HTML output.
+
+GENERATE_HTML          = YES
+
+# The HTML_OUTPUT tag is used to specify where the HTML docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `html' will be used as the default path.
+
+HTML_OUTPUT            = html
+
+# The HTML_FILE_EXTENSION tag can be used to specify the file extension for
+# each generated HTML page (for example: .htm,.php,.asp). If it is left blank
+# doxygen will generate files with .html extension.
+
+HTML_FILE_EXTENSION    = .html
+
+# The HTML_HEADER tag can be used to specify a personal HTML header for
+# each generated HTML page. If it is left blank doxygen will generate a
+# standard header.
+
+HTML_HEADER            =
+
+# The HTML_FOOTER tag can be used to specify a personal HTML footer for
+# each generated HTML page. If it is left blank doxygen will generate a
+# standard footer.
+
+HTML_FOOTER            =
+
+# The HTML_STYLESHEET tag can be used to specify a user-defined cascading
+# style sheet that is used by each HTML page. It can be used to
+# fine-tune the look of the HTML output. If the tag is left blank doxygen
+# will generate a default style sheet. Note that doxygen will try to copy
+# the style sheet file to the HTML output directory, so don't put your own
+# stylesheet in the HTML output directory as well, or it will be erased!
+
+HTML_STYLESHEET        =
+
+# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
+# page will contain the date and time when the page was generated. Setting
+# this to NO can help when comparing the output of multiple runs.
+
+HTML_TIMESTAMP         = YES
+
+# If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes,
+# files or namespaces will be aligned in HTML using tables. If set to
+# NO a bullet list will be used.
+
+HTML_ALIGN_MEMBERS     = YES
+
+# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
+# documentation will contain sections that can be hidden and shown after the
+# page has loaded. For this to work a browser that supports
+# JavaScript and DHTML is required (for instance Mozilla 1.0+, Firefox,
+# Netscape 6.0+, Internet explorer 5.0+, Konqueror, or Safari).
+
+HTML_DYNAMIC_SECTIONS  = NO
+
+# If the GENERATE_DOCSET tag is set to YES, additional index files
+# will be generated that can be used as input for Apple's Xcode 3
+# integrated development environment, introduced with OSX 10.5 (Leopard).
+# To create a documentation set, doxygen will generate a Makefile in the
+# HTML output directory. Running make will produce the docset in that
+# directory and running "make install" will install the docset in
+# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find
+# it at startup.
+# See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html for more information.
+
+GENERATE_DOCSET        = NO
+
+# When GENERATE_DOCSET tag is set to YES, this tag determines the name of the
+# feed. A documentation feed provides an umbrella under which multiple
+# documentation sets from a single provider (such as a company or product suite)
+# can be grouped.
+
+DOCSET_FEEDNAME        = "Doxygen generated docs"
+
+# When GENERATE_DOCSET tag is set to YES, this tag specifies a string that
+# should uniquely identify the documentation set bundle. This should be a
+# reverse domain-name style string, e.g. com.mycompany.MyDocSet. Doxygen
+# will append .docset to the name.
+
+DOCSET_BUNDLE_ID       = org.doxygen.Project
+
+# If the GENERATE_HTMLHELP tag is set to YES, additional index files
+# will be generated that can be used as input for tools like the
+# Microsoft HTML help workshop to generate a compiled HTML help file (.chm)
+# of the generated HTML documentation.
+
+GENERATE_HTMLHELP      = NO
+
+# If the GENERATE_HTMLHELP tag is set to YES, the CHM_FILE tag can
+# be used to specify the file name of the resulting .chm file. You
+# can add a path in front of the file if the result should not be
+# written to the html output directory.
+
+CHM_FILE               =
+
+# If the GENERATE_HTMLHELP tag is set to YES, the HHC_LOCATION tag can
+# be used to specify the location (absolute path including file name) of
+# the HTML help compiler (hhc.exe). If non-empty doxygen will try to run
+# the HTML help compiler on the generated index.hhp.
+
+HHC_LOCATION           =
+
+# If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag
+# controls whether a separate .chi index file is generated (YES) or
+# included in the master .chm file (NO).
+
+GENERATE_CHI           = NO
+
+# If the GENERATE_HTMLHELP tag is set to YES, the CHM_INDEX_ENCODING
+# is used to encode HtmlHelp index (hhk), content (hhc) and project file
+# content.
+
+CHM_INDEX_ENCODING     =
+
+# If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag
+# controls whether a binary table of contents is generated (YES) or a
+# normal table of contents (NO) in the .chm file.
+
+BINARY_TOC             = NO
+
+# The TOC_EXPAND flag can be set to YES to add extra items for group members
+# to the contents of the HTML help documentation and to the tree view.
+
+TOC_EXPAND             = NO
+
+# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
+# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated
+# that can be used as input for Qt's qhelpgenerator to generate a
+# Qt Compressed Help (.qch) of the generated HTML documentation.
+
+GENERATE_QHP           = NO
+
+# If the QHG_LOCATION tag is specified, the QCH_FILE tag can
+# be used to specify the file name of the resulting .qch file.
+# The path specified is relative to the HTML output folder.
+
+QCH_FILE               =
+
+# The QHP_NAMESPACE tag specifies the namespace to use when generating
+# Qt Help Project output. For more information please see
+# http://doc.trolltech.com/qthelpproject.html#namespace
+
+QHP_NAMESPACE          = org.doxygen.Project
+
+# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating
+# Qt Help Project output. For more information please see
+# http://doc.trolltech.com/qthelpproject.html#virtual-folders
+
+QHP_VIRTUAL_FOLDER     = doc
+
+# If QHP_CUST_FILTER_NAME is set, it specifies the name of a custom filter to add.
+# For more information please see
+# http://doc.trolltech.com/qthelpproject.html#custom-filters
+
+QHP_CUST_FILTER_NAME   =
+
+# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
+# custom filter to add. For more information please see
+# <a href="http://doc.trolltech.com/qthelpproject.html#custom-filters">Qt Help Project / Custom Filters</a>.
+
+QHP_CUST_FILTER_ATTRS  =
+
+# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this project's
+# filter section matches.
+# <a href="http://doc.trolltech.com/qthelpproject.html#filter-attributes">Qt Help Project / Filter Attributes</a>.
+
+QHP_SECT_FILTER_ATTRS  =
+
+# If the GENERATE_QHP tag is set to YES, the QHG_LOCATION tag can
+# be used to specify the location of Qt's qhelpgenerator.
+# If non-empty doxygen will try to run qhelpgenerator on the generated
+# .qhp file.
+
+QHG_LOCATION           =
+
+# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files
+# will be generated, which together with the HTML files, form an Eclipse help
+# plugin. To install this plugin and make it available under the help contents
+# menu in Eclipse, the contents of the directory containing the HTML and XML
+# files needs to be copied into the plugins directory of eclipse. The name of
+# the directory within the plugins directory should be the same as
+# the ECLIPSE_DOC_ID value. After copying Eclipse needs to be restarted before
+# the help appears.
+
+GENERATE_ECLIPSEHELP   = NO
+
+# A unique identifier for the eclipse help plugin. When installing the plugin
+# the directory name containing the HTML and XML files should also have
+# this name.
+
+ECLIPSE_DOC_ID         = org.doxygen.Project
+
+# The DISABLE_INDEX tag can be used to turn on/off the condensed index at
+# top of each HTML page. The value NO (the default) enables the index and
+# the value YES disables it.
+
+DISABLE_INDEX          = NO
+
+# This tag can be used to set the number of enum values (range [1..20])
+# that doxygen will group on one line in the generated HTML documentation.
+
+ENUM_VALUES_PER_LINE   = 4
+
+# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
+# structure should be generated to display hierarchical information.
+# If the tag value is set to YES, a side panel will be generated
+# containing a tree-like index structure (just like the one that
+# is generated for HTML Help). For this to work a browser that supports
+# JavaScript, DHTML, CSS and frames is required (i.e. any modern browser).
+# Windows users are probably better off using the HTML help feature.
+
+GENERATE_TREEVIEW      = NO
+
+# By enabling USE_INLINE_TREES, doxygen will generate the Groups, Directories,
+# and Class Hierarchy pages using a tree view instead of an ordered list.
+
+USE_INLINE_TREES       = NO
+
+# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be
+# used to set the initial width (in pixels) of the frame in which the tree
+# is shown.
+
+TREEVIEW_WIDTH         = 250
+
+# Use this tag to change the font size of Latex formulas included
+# as images in the HTML documentation. The default is 10. Note that
+# when you change the font size after a successful doxygen run you need
+# to manually remove any form_*.png images from the HTML output directory
+# to force them to be regenerated.
+
+FORMULA_FONTSIZE       = 10
+
+# When the SEARCHENGINE tag is enabled doxygen will generate a search box
+# for the HTML output. The underlying search engine uses javascript and DHTML
+# and should work on any modern browser. Note that when using HTML help
+# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
+# there is already a search function so this one should typically be disabled.
+# For large projects the javascript based search engine can be slow, in which
+# case enabling SERVER_BASED_SEARCH may provide a better solution.
+
+SEARCHENGINE           = YES
+
+# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
+# implemented using a PHP enabled web server instead of at the web client
+# using Javascript. Doxygen will generate the search PHP script and index
+# file to put on the web server. The advantage of the server based approach
+# is that it scales better to large projects and allows full text search.
+# The disadvantage is that it is more difficult to set up
+# and does not have live searching capabilities.
+
+SERVER_BASED_SEARCH    = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the LaTeX output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_LATEX tag is set to YES (the default) Doxygen will
+# generate Latex output.
+
+GENERATE_LATEX         = YES
+
+# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `latex' will be used as the default path.
+
+LATEX_OUTPUT           = latex
+
+# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
+# invoked. If left blank `latex' will be used as the default command name.
+# Note that when enabling USE_PDFLATEX this option is only used for
+# generating bitmaps for formulas in the HTML output, but not in the
+# Makefile that is written to the output directory.
+
+LATEX_CMD_NAME         = latex
+
+# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to
+# generate index for LaTeX. If left blank `makeindex' will be used as the
+# default command name.
+
+MAKEINDEX_CMD_NAME     = makeindex
+
+# If the COMPACT_LATEX tag is set to YES Doxygen generates more compact
+# LaTeX documents. This may be useful for small projects and may help to
+# save some trees in general.
+
+COMPACT_LATEX          = NO
+
+# The PAPER_TYPE tag can be used to set the paper type that is used
+# by the printer. Possible values are: a4, a4wide, letter, legal and
+# executive. If left blank a4wide will be used.
+
+PAPER_TYPE             = a4wide
+
+# The EXTRA_PACKAGES tag can be used to specify one or more names of LaTeX
+# packages that should be included in the LaTeX output.
+
+EXTRA_PACKAGES         =
+
+# The LATEX_HEADER tag can be used to specify a personal LaTeX header for
+# the generated latex document. The header should contain everything until
+# the first chapter. If it is left blank doxygen will generate a
+# standard header. Notice: only use this tag if you know what you are doing!
+
+LATEX_HEADER           =
+
+# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated
+# is prepared for conversion to pdf (using ps2pdf). The pdf file will
+# contain links (just like the HTML output) instead of page references
+# This makes the output suitable for online browsing using a pdf viewer.
+
+PDF_HYPERLINKS         = YES
+
+# If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of
+# plain latex in the generated Makefile. Set this option to YES to get a
+# higher quality PDF documentation.
+
+USE_PDFLATEX           = YES
+
+# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode
+# command to the generated LaTeX files. This will instruct LaTeX to keep
+# running if errors occur, instead of asking the user for help.
+# This option is also used when generating formulas in HTML.
+
+LATEX_BATCHMODE        = NO
+
+# If LATEX_HIDE_INDICES is set to YES then doxygen will not
+# include the index chapters (such as File Index, Compound Index, etc.)
+# in the output.
+
+LATEX_HIDE_INDICES     = NO
+
+# If LATEX_SOURCE_CODE is set to YES then doxygen will include source code
+# with syntax highlighting in the LaTeX output. Note that which sources are
+# shown also depends on other settings such as SOURCE_BROWSER.
+
+LATEX_SOURCE_CODE      = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the RTF output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output
+# The RTF output is optimized for Word 97 and may not look very pretty with
+# other RTF readers or editors.
+
+GENERATE_RTF           = NO
+
+# The RTF_OUTPUT tag is used to specify where the RTF docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `rtf' will be used as the default path.
+
+RTF_OUTPUT             = rtf
+
+# If the COMPACT_RTF tag is set to YES Doxygen generates more compact
+# RTF documents. This may be useful for small projects and may help to
+# save some trees in general.
+
+COMPACT_RTF            = NO
+
+# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated
+# will contain hyperlink fields. The RTF file will
+# contain links (just like the HTML output) instead of page references.
+# This makes the output suitable for online browsing using WORD or other
+# programs which support those fields.
+# Note: wordpad (write) and others do not support links.
+
+RTF_HYPERLINKS         = NO
+
+# Load stylesheet definitions from file. Syntax is similar to doxygen's
+# config file, i.e. a series of assignments. You only have to provide
+# replacements, missing definitions are set to their default value.
+
+RTF_STYLESHEET_FILE    =
+
+# Set optional variables used in the generation of an rtf document.
+# Syntax is similar to doxygen's config file.
+
+RTF_EXTENSIONS_FILE    =
+
+#---------------------------------------------------------------------------
+# configuration options related to the man page output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_MAN tag is set to YES (the default) Doxygen will
+# generate man pages
+
+GENERATE_MAN           = NO
+
+# The MAN_OUTPUT tag is used to specify where the man pages will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `man' will be used as the default path.
+
+MAN_OUTPUT             = man
+
+# The MAN_EXTENSION tag determines the extension that is added to
+# the generated man pages (default is the subroutine's section .3)
+
+MAN_EXTENSION          = .3
+
+# If the MAN_LINKS tag is set to YES and Doxygen generates man output,
+# then it will generate one additional man file for each entity
+# documented in the real man page(s). These additional files
+# only source the real man page, but without them the man command
+# would be unable to find the correct page. The default is NO.
+
+MAN_LINKS              = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the XML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_XML tag is set to YES Doxygen will
+# generate an XML file that captures the structure of
+# the code including all documentation.
+
+GENERATE_XML           = NO
+
+# The XML_OUTPUT tag is used to specify where the XML pages will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `xml' will be used as the default path.
+
+XML_OUTPUT             = xml
+
+# The XML_SCHEMA tag can be used to specify an XML schema,
+# which can be used by a validating XML parser to check the
+# syntax of the XML files.
+
+XML_SCHEMA             =
+
+# The XML_DTD tag can be used to specify an XML DTD,
+# which can be used by a validating XML parser to check the
+# syntax of the XML files.
+
+XML_DTD                =
+
+# If the XML_PROGRAMLISTING tag is set to YES Doxygen will
+# dump the program listings (including syntax highlighting
+# and cross-referencing information) to the XML output. Note that
+# enabling this will significantly increase the size of the XML output.
+
+XML_PROGRAMLISTING     = YES
+
+#---------------------------------------------------------------------------
+# configuration options for the AutoGen Definitions output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will
+# generate an AutoGen Definitions (see autogen.sf.net) file
+# that captures the structure of the code including all
+# documentation. Note that this feature is still experimental
+# and incomplete at the moment.
+
+GENERATE_AUTOGEN_DEF   = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the Perl module output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_PERLMOD tag is set to YES Doxygen will
+# generate a Perl module file that captures the structure of
+# the code including all documentation. Note that this
+# feature is still experimental and incomplete at the
+# moment.
+
+GENERATE_PERLMOD       = NO
+
+# If the PERLMOD_LATEX tag is set to YES Doxygen will generate
+# the necessary Makefile rules, Perl scripts and LaTeX code to be able
+# to generate PDF and DVI output from the Perl module output.
+
+PERLMOD_LATEX          = NO
+
+# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be
+# nicely formatted so it can be parsed by a human reader. This is useful
+# if you want to understand what is going on. On the other hand, if this
+# tag is set to NO the size of the Perl module output will be much smaller
+# and Perl will parse it just the same.
+
+PERLMOD_PRETTY         = YES
+
+# The names of the make variables in the generated doxyrules.make file
+# are prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX.
+# This is useful so different doxyrules.make files included by the same
+# Makefile don't overwrite each other's variables.
+
+PERLMOD_MAKEVAR_PREFIX =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the preprocessor
+#---------------------------------------------------------------------------
+
+# If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will
+# evaluate all C-preprocessor directives found in the sources and include
+# files.
+
+ENABLE_PREPROCESSING   = YES
+
+# If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro
+# names in the source code. If set to NO (the default) only conditional
+# compilation will be performed. Macro expansion can be done in a controlled
+# way by setting EXPAND_ONLY_PREDEF to YES.
+
+MACRO_EXPANSION        = NO
+
+# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES
+# then the macro expansion is limited to the macros specified with the
+# PREDEFINED and EXPAND_AS_DEFINED tags.
+
+EXPAND_ONLY_PREDEF     = NO
+
+# If the SEARCH_INCLUDES tag is set to YES (the default) the include files
+# in the INCLUDE_PATH (see below) will be searched if a #include is found.
+
+SEARCH_INCLUDES        = YES
+
+# The INCLUDE_PATH tag can be used to specify one or more directories that
+# contain include files that are not input files but should be processed by
+# the preprocessor.
+
+INCLUDE_PATH           =
+
+# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
+# patterns (like *.h and *.hpp) to filter out the header-files in the
+# directories. If left blank, the patterns specified with FILE_PATTERNS will
+# be used.
+
+INCLUDE_FILE_PATTERNS  =
+
+# The PREDEFINED tag can be used to specify one or more macro names that
+# are defined before the preprocessor is started (similar to the -D option of
+# gcc). The argument of the tag is a list of macros of the form: name
+# or name=definition (no spaces). If the definition and the = are
+# omitted =1 is assumed. To prevent a macro definition from being
+# undefined via #undef or recursively expanded use the := operator
+# instead of the = operator.
+
+PREDEFINED             =
+
+# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then
+# this tag can be used to specify a list of macro names that should be expanded.
+# The macro definition that is found in the sources will be used.
+# Use the PREDEFINED tag if you want to use a different macro definition.
+
+EXPAND_AS_DEFINED      =
+
+# If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then
+# doxygen's preprocessor will remove all function-like macros that are alone
+# on a line, have an all uppercase name, and do not end with a semicolon. Such
+# function macros are typically used for boiler-plate code, and will confuse
+# the parser if not removed.
+
+SKIP_FUNCTION_MACROS   = YES
+
+#---------------------------------------------------------------------------
+# Configuration::additions related to external references
+#---------------------------------------------------------------------------
+
+# The TAGFILES option can be used to specify one or more tagfiles.
+# Optionally an initial location of the external documentation
+# can be added for each tagfile. The format of a tag file without
+# this location is as follows:
+#
+# TAGFILES = file1 file2 ...
+# Adding location for the tag files is done as follows:
+#
+# TAGFILES = file1=loc1 "file2 = loc2" ...
+# where "loc1" and "loc2" can be relative or absolute paths or
+# URLs. If a location is present for each tag, the installdox tool
+# does not have to be run to correct the links.
+# Note that each tag file must have a unique name
+# (where the name does NOT include the path)
+# If a tag file is not located in the directory in which doxygen
+# is run, you must also specify the path to the tagfile here.
+
+TAGFILES               =
+
+# When a file name is specified after GENERATE_TAGFILE, doxygen will create
+# a tag file that is based on the input files it reads.
+
+GENERATE_TAGFILE       =
+
+# If the ALLEXTERNALS tag is set to YES all external classes will be listed
+# in the class index. If set to NO only the inherited external classes
+# will be listed.
+
+ALLEXTERNALS           = NO
+
+# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed
+# in the modules index. If set to NO, only the current project's groups will
+# be listed.
+
+EXTERNAL_GROUPS        = YES
+
+# The PERL_PATH should be the absolute path and name of the perl script
+# interpreter (i.e. the result of `which perl').
+
+PERL_PATH              = /usr/bin/perl
+
+#---------------------------------------------------------------------------
+# Configuration options related to the dot tool
+#---------------------------------------------------------------------------
+
+# If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will
+# generate an inheritance diagram (in HTML, RTF and LaTeX) for classes with base
+# or super classes. Setting the tag to NO turns the diagrams off. Note that
+# this option is superseded by the HAVE_DOT option below. This is only a
+# fallback. It is recommended to install and use dot, since it yields more
+# powerful graphs.
+
+CLASS_DIAGRAMS         = YES
+
+# You can define message sequence charts within doxygen comments using the \msc
+# command. Doxygen will then run the mscgen tool (see
+# http://www.mcternan.me.uk/mscgen/) to produce the chart and insert it in the
+# documentation. The MSCGEN_PATH tag allows you to specify the directory where
+# the mscgen tool resides. If left empty the tool is assumed to be found in the
+# default search path.
+
+MSCGEN_PATH            =
+
+# If set to YES, the inheritance and collaboration graphs will hide
+# inheritance and usage relations if the target is undocumented
+# or is not a class.
+
+HIDE_UNDOC_RELATIONS   = YES
+
+# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
+# available from the path. This tool is part of Graphviz, a graph visualization
+# toolkit from AT&T and Lucent Bell Labs. The other options in this section
+# have no effect if this option is set to NO (the default)
+
+HAVE_DOT               = NO
+
+# By default doxygen will write a font called FreeSans.ttf to the output
+# directory and reference it in all dot files that doxygen generates. This
+# font does not include all possible unicode characters however, so when you need
+# these (or just want a differently looking font) you can specify the font name
+# using DOT_FONTNAME. You need to make sure dot is able to find the font,
+# which can be done by putting it in a standard location or by setting the
+# DOTFONTPATH environment variable or by setting DOT_FONTPATH to the directory
+# containing the font.
+
+DOT_FONTNAME           = FreeSans
+
+# The DOT_FONTSIZE tag can be used to set the size of the font of dot graphs.
+# The default size is 10pt.
+
+DOT_FONTSIZE           = 10
+
+# By default doxygen will tell dot to use the output directory to look for the
+# FreeSans.ttf font (which doxygen will put there itself). If you specify a
+# different font using DOT_FONTNAME you can set the path where dot
+# can find it using this tag.
+
+DOT_FONTPATH           =
+
+# If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen
+# will generate a graph for each documented class showing the direct and
+# indirect inheritance relations. Setting this tag to YES will force the
+# CLASS_DIAGRAMS tag to NO.
+
+CLASS_GRAPH            = YES
+
+# If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen
+# will generate a graph for each documented class showing the direct and
+# indirect implementation dependencies (inheritance, containment, and
+# class references variables) of the class with other documented classes.
+
+COLLABORATION_GRAPH    = YES
+
+# If the GROUP_GRAPHS and HAVE_DOT tags are set to YES then doxygen
+# will generate a graph for groups, showing the direct groups dependencies
+
+GROUP_GRAPHS           = YES
+
+# If the UML_LOOK tag is set to YES doxygen will generate inheritance and
+# collaboration diagrams in a style similar to the OMG's Unified Modeling
+# Language.
+
+UML_LOOK               = NO
+
+# If set to YES, the inheritance and collaboration graphs will show the
+# relations between templates and their instances.
+
+TEMPLATE_RELATIONS     = NO
+
+# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT
+# tags are set to YES then doxygen will generate a graph for each documented
+# file showing the direct and indirect include dependencies of the file with
+# other documented files.
+
+INCLUDE_GRAPH          = YES
+
+# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and
+# HAVE_DOT tags are set to YES then doxygen will generate a graph for each
+# documented header file showing the documented files that directly or
+# indirectly include this file.
+
+INCLUDED_BY_GRAPH      = YES
+
+# If the CALL_GRAPH and HAVE_DOT options are set to YES then
+# doxygen will generate a call dependency graph for every global function
+# or class method. Note that enabling this option will significantly increase
+# the time of a run. So in most cases it will be better to enable call graphs
+# for selected functions only using the \callgraph command.
+
+CALL_GRAPH             = NO
+
+# If the CALLER_GRAPH and HAVE_DOT tags are set to YES then
+# doxygen will generate a caller dependency graph for every global function
+# or class method. Note that enabling this option will significantly increase
+# the time of a run. So in most cases it will be better to enable caller
+# graphs for selected functions only using the \callergraph command.
+
+CALLER_GRAPH           = NO
+
+# If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen
+# will show a graphical hierarchy of all classes instead of a textual one.
+
+GRAPHICAL_HIERARCHY    = YES
+
+# If the DIRECTORY_GRAPH, SHOW_DIRECTORIES and HAVE_DOT tags are set to YES
+# then doxygen will show the dependencies a directory has on other directories
+# in a graphical way. The dependency relations are determined by the #include
+# relations between the files in the directories.
+
+DIRECTORY_GRAPH        = YES
+
+# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
+# generated by dot. Possible values are png, jpg, or gif
+# If left blank png will be used.
+
+DOT_IMAGE_FORMAT       = png
+
+# The tag DOT_PATH can be used to specify the path where the dot tool can be
+# found. If left blank, it is assumed the dot tool can be found in the path.
+
+DOT_PATH               =
+
+# The DOTFILE_DIRS tag can be used to specify one or more directories that
+# contain dot files that are included in the documentation (see the
+# \dotfile command).
+
+DOTFILE_DIRS           =
+
+# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of
+# nodes that will be shown in the graph. If the number of nodes in a graph
+# becomes larger than this value, doxygen will truncate the graph, which is
+# visualized by representing a node as a red box. Note that if the
+# number of direct children of the root node in a graph is already larger than
+# DOT_GRAPH_MAX_NODES, the graph will not be shown at all. Also note
+# that the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
+
+DOT_GRAPH_MAX_NODES    = 50
+
+# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the
+# graphs generated by dot. A depth value of 3 means that only nodes reachable
+# from the root by following a path via at most 3 edges will be shown. Nodes
+# that lay further from the root node will be omitted. Note that setting this
+# option to 1 or 2 may greatly reduce the computation time needed for large
+# code bases. Also note that the size of a graph can be further restricted by
+# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
+
+MAX_DOT_GRAPH_DEPTH    = 0
+
+# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
+# background. This is disabled by default, because dot on Windows does not
+# seem to support this out of the box. Warning: Depending on the platform used,
+# enabling this option may lead to badly anti-aliased labels on the edges of
+# a graph (i.e. they become hard to read).
+
+DOT_TRANSPARENT        = NO
+
+# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
+# files in one run (i.e. multiple -o and -T options on the command line). This
+# makes dot run faster, but since only newer versions of dot (>1.8.10)
+# support this, this feature is disabled by default.
+
+DOT_MULTI_TARGETS      = YES
+
+# If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will
+# generate a legend page explaining the meaning of the various boxes and
+# arrows in the dot generated graphs.
+
+GENERATE_LEGEND        = YES
+
+# If the DOT_CLEANUP tag is set to YES (the default) Doxygen will
+# remove the intermediate dot files that are used to generate
+# the various graphs.
+
+DOT_CLEANUP            = YES
diff --git a/c_gen/templates/README b/c_gen/templates/README
new file mode 100644
index 0000000..71c4a48
--- /dev/null
+++ b/c_gen/templates/README
@@ -0,0 +1,73 @@
+Introduction
+============
+
+LOCI is the C language module of LOXI, the Logical OpenFlow Extensible
+Interface. It provides a framework for managing OpenFlow objects in
+an object oriented fashion.
+
+All files in LOCI, even this README, are generated by LOXI. Please make
+modifications to LOXI instead of to these files directly.
+
+Compilation
+===========
+
+It's expected that users of LOCI will want to integrate it into their own build
+system. As an example, here's a simple command line to create a shared library
+on Linux:
+
+    gcc -fPIC -shared -I inc src/*.c -o loci.so
+
+LOCI has no dependencies beyond the C standard library.
+
+Documentation
+=============
+
+A Doxygen configuration file is provided. Just run `doxygen` to generate HTML
+documentation pages. Each OpenFlow object is linked to under the "Modules"
+tab.
+
+Usage
+=====
+
+Currently the best example code for using LOCI is the switchlight-core
+repository.
+
+Here's example code that creates a flow mod:
+
+    /* Create the match */
+    of_match_t match;
+    memset(&match, 0, sizeof(match));
+    match.fields.in_port = 1;
+    OF_MATCH_MASK_IN_PORT_EXACT_SET(&match);
+    match.fields.eth_type = 0x0800; /* IPv4 */
+    OF_MATCH_MASK_ETH_TYPE_EXACT_SET(&match);
+
+    /* Create a flow mod */
+    of_flow_add_t *flow_add = of_flow_add_new(OF_VERSION_1_0);
+    of_flow_add_idle_timeout_set(flow_add, 5);
+    of_flow_add_cookie_set(flow_add, 42);
+    of_flow_add_priority_set(flow_add, 10000);
+    of_flow_add_buffer_id_set(flow_add, -1);
+    if (of_flow_add_match_set(flow_add, &match)) {
+        fprintf(stderr, "Failed to set the match field\n");
+        abort();
+    }
+
+    /* Populate the action list */
+    of_list_action_t actions;
+    of_flow_add_actions_bind(flow_add, &actions);
+    int i;
+    for (i = 1; i <= 4; i++) {
+        of_action_output_t action;
+        of_action_output_init(&action, flow_add->version, -1, 1);
+        of_list_action_append_bind(&actions, (of_action_t *)&action);
+        of_action_output_port_set(&action, i);
+    }
+
+    /* Use the underlying buffer */
+    void *buf = WBUF_BUF(OF_OBJECT_TO_WBUF(flow_add));
+    uint16_t len = flow_add->length;
+    /* ... */
+
+    /* Free the flow mod */
+    of_flow_add_delete(flow_add);
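+
+Before the object is freed, the underlying buffer can be handed to whatever
+transport the application uses. As a minimal sketch (not part of the LOCI
+API; `fd` is assumed to be an already-connected socket descriptor):
+
+    /* buf and len come from the example above */
+    if (write(fd, buf, len) < 0) {
+        perror("write");
+    }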
diff --git a/c_gen/templates/bsn_ext.h b/c_gen/templates/bsn_ext.h
new file mode 100644
index 0000000..6afa211
--- /dev/null
+++ b/c_gen/templates/bsn_ext.h
@@ -0,0 +1,44 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/**
+ * BSN OpenFlow extension definition header file
+ */
+
+#ifndef _LOCI_BSN_EXT_H_
+#define _LOCI_BSN_EXT_H_
+
+/* Mirroring for destination, bit in port_config */
+#define OF_PORT_CONFIG_BSN_MIRROR_DEST (1 << 31) /* Mirror destination only */
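+/* Usage sketch (illustrative only): a port is marked as a mirror destination
+ * by setting this bit in its port_config bitmap, e.g.
+ *     config |= OF_PORT_CONFIG_BSN_MIRROR_DEST;
+ */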
+
+/* Point at which mirroring is to occur */
+#define OF_BSN_MIRROR_STAGE_INGRESS 0
+#define OF_BSN_MIRROR_STAGE_EGRESS 1
+
+#endif /* _LOCI_BSN_EXT_H_ */
diff --git a/c_gen/templates/loci_dox.h b/c_gen/templates/loci_dox.h
new file mode 100644
index 0000000..86c29db
--- /dev/null
+++ b/c_gen/templates/loci_dox.h
@@ -0,0 +1,55 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/**************************************************************************//**
+ * 
+ * loci Doxygen Header
+ * 
+ *****************************************************************************/
+#ifndef __LOCI_DOX_H__
+#define __LOCI_DOX_H__
+
+/**
+ * @defgroup loci loci - loci Description
+ *
+
+The documentation overview for this module should go here. 
+
+ *
+ * @{
+ *
+ * @defgroup loci-loci Public Interface
+ * @defgroup loci-config Compile Time Configuration
+ * @defgroup loci-porting Porting Macros
+ * 
+ * @}
+ *
+ */
+
+#endif /* __LOCI_DOX_H__ */
diff --git a/c_gen/templates/loci_dump.h b/c_gen/templates/loci_dump.h
new file mode 100644
index 0000000..12bba6e
--- /dev/null
+++ b/c_gen/templates/loci_dump.h
@@ -0,0 +1,101 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2012, Big Switch Networks, Inc. */
+
+#if !defined(_LOCI_DUMP_H_)
+#define _LOCI_DUMP_H_
+
+#include <loci/loci_base.h>
+#include <loci/of_match.h>
+#include <stdio.h>
+
+/* g++ requires this to pick up PRI, etc.
+ * See  http://gcc.gnu.org/ml/gcc-help/2006-10/msg00223.html
+ */
+#if !defined(__STDC_FORMAT_MACROS)
+#define __STDC_FORMAT_MACROS
+#endif
+#include <inttypes.h>
+
+typedef int (*loci_obj_dump_f)(loci_writer_f writer, void *cookie, of_object_t *obj);
+
+/****************************************************************
+ *
+ * Per-datatype dump macros
+ *
+ ****************************************************************/
+#define LOCI_DUMP_u8(writer, cookie, val) writer(cookie, "%u", (val))
+#define LOCI_DUMP_u16(writer, cookie, val) writer(cookie, "%u (0x%x)", (val), (val))
+#define LOCI_DUMP_u32(writer, cookie, val) writer(cookie, "%u (0x%x)", (val), (val))
+#define LOCI_DUMP_u64(writer, cookie, val) writer(cookie, "%" PRIu64 " (0x%" PRIx64 ")", (val), (val))
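+
+/* For example, with a writer/cookie pair (names here are illustrative),
+ * LOCI_DUMP_u16(writer, cookie, 80) results in the call
+ * writer(cookie, "%u (0x%x)", 80, 80), i.e. the output "80 (0x50)".
+ */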
+
+/* @todo Add checks for special port numbers */
+#define LOCI_DUMP_port_no(writer, cookie, val) LOCI_DUMP_u32(writer, cookie, val)
+#define LOCI_DUMP_fm_cmd(writer, cookie, val) LOCI_DUMP_u16(writer, cookie, val)
+
+/* @todo Decode wildcards */
+#define LOCI_DUMP_wc_bmap(writer, cookie, val)         \
+    writer(cookie, "0x%" PRIx64, (val))
+#define LOCI_DUMP_match_bmap(writer, cookie, val)      \
+    writer(cookie, "0x%" PRIx64, (val))
+
+/* @todo Dump first N bytes of data */
+#define LOCI_DUMP_octets(writer, cookie, val)                                      \
+    writer(cookie, "%d bytes at location %p", (val).bytes, (val).data)
+
+#define LOCI_DUMP_mac(writer, cookie, val)                                 \
+    writer(cookie, "%02x:%02x:%02x:%02x:%02x:%02x",            \
+               (val).addr[0], (val).addr[1], (val).addr[2],     \
+               (val).addr[3], (val).addr[4], (val).addr[5])
+
+#define LOCI_DUMP_ipv4(writer, cookie, val)                                        \
+    writer(cookie, "%d.%d.%d.%d", val >> 24, (val >> 16) & 0xff,       \
+               (val >> 8) & 0xff, val & 0xff)
+
+#define LOCI_DUMP_ipv6(writer, cookie, val)                                        \
+    writer(cookie, "%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x", \
+               (val).addr[0], (val).addr[1], (val).addr[2], (val).addr[3], \
+               (val).addr[4], (val).addr[5], (val).addr[6], (val).addr[7], \
+               (val).addr[8], (val).addr[9], (val).addr[10], (val).addr[11], \
+               (val).addr[12], (val).addr[13], (val).addr[14], (val).addr[15])
+
+#define LOCI_DUMP_string(writer, cookie, val) writer(cookie, "%s", val)
+
+#define LOCI_DUMP_port_name(writer, cookie, val) LOCI_DUMP_string(writer, cookie, val)
+#define LOCI_DUMP_tab_name(writer, cookie, val) LOCI_DUMP_string(writer, cookie, val)
+#define LOCI_DUMP_desc_str(writer, cookie, val) LOCI_DUMP_string(writer, cookie, val)
+#define LOCI_DUMP_ser_num(writer, cookie, val) LOCI_DUMP_string(writer, cookie, val)
+
+int loci_dump_match(loci_writer_f writer, void* cookie, of_match_t *match);
+#define LOCI_DUMP_match(writer, cookie, val) loci_dump_match(writer, cookie, &val)
+
+/**
+ * Generic version for any object
+ */
+int of_object_dump(loci_writer_f writer, void *cookie, of_object_t *obj);
+#endif /* _LOCI_DUMP_H_ */
diff --git a/c_gen/templates/loci_int.h b/c_gen/templates/loci_int.h
new file mode 100644
index 0000000..ce5ecb0
--- /dev/null
+++ b/c_gen/templates/loci_int.h
@@ -0,0 +1,45 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/******************************************************************************
+ *
+ *  /module/src/loci_int.h
+ *
+ *  loci Internal Header
+ *
+ *****************************************************************************/
+#ifndef __LOCI_INT_H__
+#define __LOCI_INT_H__
+
+
+
+
+
+#include <loci/loci.h> 
+#endif /* __LOCI_INT_H__ */
diff --git a/c_gen/templates/loci_log.c b/c_gen/templates/loci_log.c
new file mode 100644
index 0000000..cf34eae
--- /dev/null
+++ b/c_gen/templates/loci_log.c
@@ -0,0 +1,44 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2012, Big Switch Networks, Inc. */
+
+#include <stdarg.h>
+
+#include "loci_log.h"
+
+#include <loci/loci.h>
+
+static int
+loci_null_logger(loci_log_level_t level,
+                 const char *fname, const char *file, int line,
+                 const char *format, ...)
+{
+    return 0;
+}
+
+loci_logger_f loci_logger = loci_null_logger;
diff --git a/c_gen/templates/loci_log.h b/c_gen/templates/loci_log.h
new file mode 100644
index 0000000..3e22d81
--- /dev/null
+++ b/c_gen/templates/loci_log.h
@@ -0,0 +1,57 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2012, Big Switch Networks, Inc. */
+
+#if !defined(_LOCI_LOG_H_)
+#define _LOCI_LOG_H_
+
+#include <loci/loci_base.h>
+#include <loci/of_match.h>
+#include <stdio.h>
+
+/* g++ requires this to pick up PRI, etc.
+ * See  http://gcc.gnu.org/ml/gcc-help/2006-10/msg00223.html
+ */
+#if !defined(__STDC_FORMAT_MACROS)
+#define __STDC_FORMAT_MACROS
+#endif
+#include <inttypes.h>
+
+/**
+ * Per level log macros.  printf semantics
+ */
+
+#define LOCI_LOG_COMMON(level, ...) loci_logger(level, __func__, __FILE__, __LINE__, __VA_ARGS__)
+#define LOCI_LOG_TRACE(...) LOCI_LOG_COMMON(LOCI_LOG_LEVEL_TRACE, __VA_ARGS__)
+#define LOCI_LOG_VERBOSE(...) LOCI_LOG_COMMON(LOCI_LOG_LEVEL_VERBOSE, __VA_ARGS__)
+#define LOCI_LOG_INFO(...) LOCI_LOG_COMMON(LOCI_LOG_LEVEL_INFO, __VA_ARGS__)
+#define LOCI_LOG_WARN(...) LOCI_LOG_COMMON(LOCI_LOG_LEVEL_WARN, __VA_ARGS__)
+#define LOCI_LOG_ERROR(...) LOCI_LOG_COMMON(LOCI_LOG_LEVEL_ERROR, __VA_ARGS__)
+#define LOCI_LOG_MSG(...) LOCI_LOG_COMMON(LOCI_LOG_LEVEL_MSG, __VA_ARGS__)
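+
+/*
+ * For example (hypothetical call site):
+ *     LOCI_LOG_ERROR("failed to parse object: version %d", version);
+ * expands to:
+ *     loci_logger(LOCI_LOG_LEVEL_ERROR, __func__, __FILE__, __LINE__,
+ *                 "failed to parse object: version %d", version);
+ */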
+
+#endif /* _LOCI_LOG_H_ */
diff --git a/c_gen/templates/loci_show.h b/c_gen/templates/loci_show.h
new file mode 100644
index 0000000..86efab1
--- /dev/null
+++ b/c_gen/templates/loci_show.h
@@ -0,0 +1,328 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2012, Big Switch Networks, Inc. */
+
+#if !defined(_LOCI_SHOW_H_)
+#define _LOCI_SHOW_H_
+
+#include <loci/loci_base.h>
+#include <loci/of_match.h>
+#include <stdio.h>
+
+/* g++ requires this to pick up PRI, etc.
+ * See  http://gcc.gnu.org/ml/gcc-help/2006-10/msg00223.html
+ */
+#if !defined(__STDC_FORMAT_MACROS)
+#define __STDC_FORMAT_MACROS
+#endif
+#include <inttypes.h>
+
+typedef int (*loci_obj_show_f)(loci_writer_f writer,
+                               void *cookie, of_object_t *obj);
+
+/****************************************************************
+ *
+ * Per-datatype dump macros
+ *
+ ****************************************************************/
+#define LOCI_SHOW_u8(writer, cookie, val) writer(cookie, "%u", (val))
+#define LOCI_SHOW_u16(writer, cookie, val) writer(cookie, "%u (0x%x)", (val), (val))
+#define LOCI_SHOW_u32(writer, cookie, val) writer(cookie, "%u (0x%x)", (val), (val))
+#define LOCI_SHOW_u64(writer, cookie, val) writer(cookie, "%" PRIu64 " (0x%" PRIx64 ")", (val), (val))
+
+/* Note: these two helpers expand to a call on the identifier 'writer' taken
+ * from the expansion site rather than from a parameter; the typed wrappers
+ * below must therefore be used where a loci_writer_f named 'writer' is in
+ * scope. */
+#define LOCI_SHOW_D_INT(cookie, macro, val) writer(cookie, "%" macro , val);
+#define LOCI_SHOW_X_INT(cookie, macro, val) writer(cookie, "0x%" macro, val);
+
+#define LOCI_SHOW_x8(writer, cookie, val) LOCI_SHOW_X_INT(cookie,  PRIx8, val)
+#define LOCI_SHOW_x16(writer, cookie, val) LOCI_SHOW_X_INT(cookie, PRIx16, val)
+#define LOCI_SHOW_x32(writer, cookie, val) LOCI_SHOW_X_INT(cookie, PRIx32, val)
+#define LOCI_SHOW_x64(writer, cookie, val) LOCI_SHOW_X_INT(cookie, PRIx64, val)
+#define LOCI_SHOW_d8(writer, cookie, val) LOCI_SHOW_D_INT(cookie, PRId8, val)
+#define LOCI_SHOW_d16(writer, cookie, val) LOCI_SHOW_D_INT(cookie, PRId16, val)
+#define LOCI_SHOW_d32(writer, cookie, val) LOCI_SHOW_D_INT(cookie, PRId32, val)
+#define LOCI_SHOW_d64(writer, cookie, val) LOCI_SHOW_D_INT(cookie, PRId64, val)
+
+
+
+/**
+ * Field-specific show macros. 
+ */
+#define LOCI_SHOW_u32_ipv6_flabel(writer, cookie, val)     LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u8_vlan_pcp(writer, cookie, val)         LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u32_ipv4_src(writer, cookie, val)        LOCI_SHOW_ipv4(writer, cookie, val)
+#define LOCI_SHOW_ipv6_ipv6_dst(writer, cookie, val)       LOCI_SHOW_ipv6(writer, cookie, val)
+#define LOCI_SHOW_u32_arp_tpa(writer, cookie, val)         LOCI_SHOW_ipv4(writer, cookie, val)
+#define LOCI_SHOW_u8_icmpv6_type(writer, cookie, val)      LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_mac_arp_sha(writer, cookie, val)         LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_ipv6_ipv6_src(writer, cookie, val)       LOCI_SHOW_ipv6(writer, cookie, val)
+#define LOCI_SHOW_u16_sctp_src(writer, cookie, val)        LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u8_icmpv6_code(writer, cookie, val)      LOCI_SHOW_x8(writer, cookie, val)
+#define LOCI_SHOW_mac_eth_dst(writer, cookie, val)         LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_mac_ipv6_nd_sll(writer, cookie, val)     LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_u8_mpls_tc(writer, cookie, val)          LOCI_SHOW_x8(writer, cookie, val)
+#define LOCI_SHOW_u16_arp_op(writer, cookie, val)          LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u16_eth_type(writer, cookie, val)        LOCI_SHOW_x16(writer, cookie, val)
+#define LOCI_SHOW_ipv6_ipv6_nd_target(writer, cookie, val) LOCI_SHOW_ipv6(writer, cookie, val)
+#define LOCI_SHOW_u16_vlan_vid(writer, cookie, val)        LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_mac_arp_tha(writer, cookie, val)         LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_port_no_in_port(writer, cookie, val)     LOCI_SHOW_port_no(writer, cookie, val)
+#define LOCI_SHOW_u8_ip_dscp(writer, cookie, val)          LOCI_SHOW_x8(writer, cookie, val)
+#define LOCI_SHOW_u16_sctp_dst(writer, cookie, val)        LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u8_icmpv4_code(writer, cookie, val)      LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u16_tcp_src(writer, cookie, val)         LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u32_arp_spa(writer, cookie, val)         LOCI_SHOW_ipv4(writer, cookie, val)
+#define LOCI_SHOW_u8_ip_ecn(writer, cookie, val)           LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u16_udp_dst(writer, cookie, val)         LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_port_no_in_phy_port(writer, cookie, val) LOCI_SHOW_port_no(writer, cookie, val)
+#define LOCI_SHOW_u32_ipv4_dst(writer, cookie, val)        LOCI_SHOW_ipv4(writer, cookie, val)
+#define LOCI_SHOW_mac_eth_src(writer, cookie, val)         LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_u16_udp_src(writer, cookie, val)         LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_mac_ipv6_nd_tll(writer, cookie, val)     LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_u8_icmpv4_type(writer, cookie, val)      LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u32_mpls_label(writer, cookie, val)      LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u16_tcp_dst(writer, cookie, val)         LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u8_ip_proto(writer, cookie, val)         LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u64_metadata(writer, cookie, val)        LOCI_SHOW_x64(writer, cookie, val)
+
+
+
+
+/* @todo Add checks for special port numbers */
+#define LOCI_SHOW_port_no(writer, cookie, val) writer(cookie, "%d", val)
+#define LOCI_SHOW_fm_cmd(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+
+/* @todo Decode wildcards */
+#define LOCI_SHOW_wc_bmap(writer, cookie, val)         \
+    writer(cookie, "0x%" PRIx64, (val))
+#define LOCI_SHOW_match_bmap(writer, cookie, val)      \
+    writer(cookie, "0x%" PRIx64, (val))
+
+/* @todo Dump first N bytes of data */
+#define LOCI_SHOW_octets(writer, cookie, val)                                      \
+    writer(cookie, "%d bytes at location %p", (val).bytes, (val).data)
+
+#define LOCI_SHOW_mac(writer, cookie, val)                                 \
+    writer(cookie, "%02x:%02x:%02x:%02x:%02x:%02x",            \
+               (val).addr[0], (val).addr[1], (val).addr[2],     \
+               (val).addr[3], (val).addr[4], (val).addr[5])
+
+#define LOCI_SHOW_ipv4(writer, cookie, val)                                        \
+    writer(cookie, "%d.%d.%d.%d", val >> 24, (val >> 16) & 0xff,       \
+               (val >> 8) & 0xff, val & 0xff)
+
+#define LOCI_SHOW_ipv6(writer, cookie, val)                                        \
+    writer(cookie, "%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x", \
+               (val).addr[0], (val).addr[1], (val).addr[2], (val).addr[3], \
+               (val).addr[4], (val).addr[5], (val).addr[6], (val).addr[7], \
+               (val).addr[8], (val).addr[9], (val).addr[10], (val).addr[11], \
+               (val).addr[12], (val).addr[13], (val).addr[14], (val).addr[15])
+
+#define LOCI_SHOW_string(writer, cookie, val) writer(cookie, "%s", val)
+
+#define LOCI_SHOW_port_name(writer, cookie, val) LOCI_SHOW_string(writer, cookie, val)
+#define LOCI_SHOW_tab_name(writer, cookie, val) LOCI_SHOW_string(writer, cookie, val)
+#define LOCI_SHOW_desc_str(writer, cookie, val) LOCI_SHOW_string(writer, cookie, val)
+#define LOCI_SHOW_ser_num(writer, cookie, val) LOCI_SHOW_string(writer, cookie, val)
+
+int loci_show_match(loci_writer_f writer, void *cookie, of_match_t *match);
+#define LOCI_SHOW_match(writer, cookie, val) loci_show_match(writer, cookie, &val)
+
+/**
+ * Generic version for any object
+ */
+int of_object_show(loci_writer_f writer, void *cookie, of_object_t *obj);
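+
+/*
+ * Illustrative sketch (editor's note, not generated code): the LOCI_SHOW
+ * macros and of_object_show() all funnel output through a caller-supplied
+ * writer.  Judging from the macro expansions above, the writer takes the
+ * opaque cookie followed by a printf-style format string, so a minimal
+ * writer that dumps to a FILE pointer could look like the following.
+ */
+#if 0 /* example only */
+#include <stdarg.h>
+#include <stdio.h>
+
+static int
+example_file_writer(void *cookie, const char *fmt, ...)
+{
+    va_list ap;
+    int rv;
+
+    va_start(ap, fmt);
+    rv = vfprintf((FILE *)cookie, fmt, ap);   /* cookie is a FILE * here */
+    va_end(ap);
+    return rv;
+}
+
+/* Usage: of_object_show(example_file_writer, stderr, obj); */
+#endif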
+
+
+
+
+/**
+ * Choose a representation for each field that 
+ * makes the most sense for display to the user. 
+ */
+#define LOCI_SHOW_u32_xid(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u16_flags(writer, cookie, val) LOCI_SHOW_x16(writer, cookie, val)
+#define LOCI_SHOW_u64_packet_count(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_byte_count(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u32_flow_count(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_match_match(writer, cookie, val) LOCI_SHOW_match(writer, cookie, val)
+#define LOCI_SHOW_u8_table_id(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_port_no_out_port(writer, cookie, val) LOCI_SHOW_port_no(writer, cookie, val)
+#define LOCI_SHOW_u32_experimenter(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_subtype(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u8_index(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u32_mask(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u8_report_mirror_ports(writer, cookie, val) LOCI_SHOW_x8(writer, cookie, val)
+#define LOCI_SHOW_desc_str_mfr_desc(writer, cookie, val) LOCI_SHOW_desc_str(writer, cookie, val)
+#define LOCI_SHOW_desc_str_hw_desc(writer, cookie, val) LOCI_SHOW_desc_str(writer, cookie, val)
+#define LOCI_SHOW_desc_str_sw_desc(writer, cookie, val) LOCI_SHOW_desc_str(writer, cookie, val)
+#define LOCI_SHOW_ser_num_serial_num(writer, cookie, val) LOCI_SHOW_ser_num(writer, cookie, val)
+#define LOCI_SHOW_desc_str_dp_desc(writer, cookie, val) LOCI_SHOW_desc_str(writer, cookie, val)
+#define LOCI_SHOW_octets_data(writer, cookie, val) LOCI_SHOW_octets(writer, cookie, val)
+#define LOCI_SHOW_u16_err_type(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u16_code(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u64_datapath_id(writer, cookie, val) LOCI_SHOW_x64(writer, cookie, val)
+#define LOCI_SHOW_u32_n_buffers(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u8_n_tables(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u32_capabilities(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_actions(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u64_cookie(writer, cookie, val) LOCI_SHOW_x64(writer, cookie, val)
+#define LOCI_SHOW_u16_idle_timeout(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u16_hard_timeout(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u16_priority(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u32_buffer_id(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u8_reason(writer, cookie, val) LOCI_SHOW_x8(writer, cookie, val)
+#define LOCI_SHOW_u32_duration_sec(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_duration_nsec(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u16_miss_send_len(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u32_role(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u16_total_len(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_port_no_port_no(writer, cookie, val) LOCI_SHOW_port_no(writer, cookie, val)
+#define LOCI_SHOW_mac_hw_addr(writer, cookie, val) LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_u32_config(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_advertise(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_port_no_port(writer, cookie, val) LOCI_SHOW_port_no(writer, cookie, val)
+#define LOCI_SHOW_u32_queue_id(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_dest_port(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_vlan_tag(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u8_copy_stage(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u16_max_len(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_mac_dl_addr(writer, cookie, val) LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_u32_nw_addr(writer, cookie, val) LOCI_SHOW_ipv4(writer, cookie, val)
+#define LOCI_SHOW_u8_nw_tos(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u16_tp_port(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_wc_bmap_wildcards(writer, cookie, val) LOCI_SHOW_wc_bmap(writer, cookie, val)
+#define LOCI_SHOW_port_name_name(writer, cookie, val) LOCI_SHOW_port_name(writer, cookie, val)
+#define LOCI_SHOW_u32_state(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_curr(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_advertised(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_supported(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_peer(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u64_rx_packets(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_tx_packets(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_rx_bytes(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_tx_bytes(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_rx_dropped(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_tx_dropped(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_rx_errors(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_tx_errors(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_rx_frame_err(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_rx_over_err(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_rx_crc_err(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_collisions(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u16_rate(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_tab_name_name(writer, cookie, val) LOCI_SHOW_tab_name(writer, cookie, val)
+#define LOCI_SHOW_u32_max_entries(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_active_count(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u64_lookup_count(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_matched_count(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u32_out_group(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u64_cookie_mask(writer, cookie, val) LOCI_SHOW_x64(writer, cookie, val)
+#define LOCI_SHOW_u32_reserved(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u16_command(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u8_group_type(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u32_group_id(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u16_ethertype(writer, cookie, val) LOCI_SHOW_x16(writer, cookie, val)
+#define LOCI_SHOW_u8_mpls_ttl(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u8_nw_ecn(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u8_nw_ttl(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u16_weight(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_port_no_watch_port(writer, cookie, val) LOCI_SHOW_port_no(writer, cookie, val)
+#define LOCI_SHOW_u32_watch_group(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_ref_count(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u64_metadata_mask(writer, cookie, val) LOCI_SHOW_x64(writer, cookie, val)
+#define LOCI_SHOW_mac_eth_src_mask(writer, cookie, val) LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_mac_eth_dst_mask(writer, cookie, val) LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_u32_ipv4_src_mask(writer, cookie, val) LOCI_SHOW_ipv4(writer, cookie, val)
+#define LOCI_SHOW_u32_ipv4_dst_mask(writer, cookie, val) LOCI_SHOW_ipv4(writer, cookie, val)
+#define LOCI_SHOW_u32_curr_speed(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_max_speed(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_match_bmap_match(writer, cookie, val) LOCI_SHOW_match_bmap(writer, cookie, val)
+#define LOCI_SHOW_u32_instructions(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_write_actions(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_apply_actions(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_types(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_max_groups_all(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_max_groups_select(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_max_groups_indirect(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_max_groups_ff(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_actions_all(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_actions_select(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_actions_indirect(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_actions_ff(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u64_generation_id(writer, cookie, val) LOCI_SHOW_x64(writer, cookie, val)
+#define LOCI_SHOW_octets_field(writer, cookie, val) LOCI_SHOW_octets(writer, cookie, val)
+#define LOCI_SHOW_u16_value(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u16_value_mask(writer, cookie, val) LOCI_SHOW_x16(writer, cookie, val)
+#define LOCI_SHOW_mac_value(writer, cookie, val) LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_mac_value_mask(writer, cookie, val) LOCI_SHOW_mac(writer, cookie, val)
+#define LOCI_SHOW_u32_value(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_value_mask(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_oxm_header(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u8_value(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u8_value_mask(writer, cookie, val) LOCI_SHOW_x8(writer, cookie, val)
+#define LOCI_SHOW_port_no_value(writer, cookie, val) LOCI_SHOW_port_no(writer, cookie, val)
+#define LOCI_SHOW_port_no_value_mask(writer, cookie, val) LOCI_SHOW_port_no(writer, cookie, val)
+#define LOCI_SHOW_ipv6_value(writer, cookie, val) LOCI_SHOW_ipv6(writer, cookie, val)
+#define LOCI_SHOW_ipv6_value_mask(writer, cookie, val) LOCI_SHOW_ipv6(writer, cookie, val)
+#define LOCI_SHOW_u64_value(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_value_mask(writer, cookie, val) LOCI_SHOW_x64(writer, cookie, val)
+#define LOCI_SHOW_u64_write_setfields(writer, cookie, val) LOCI_SHOW_x64(writer, cookie, val)
+#define LOCI_SHOW_u64_apply_setfields(writer, cookie, val) LOCI_SHOW_x64(writer, cookie, val)
+#define LOCI_SHOW_u64_metadata_match(writer, cookie, val) LOCI_SHOW_x64(writer, cookie, val)
+#define LOCI_SHOW_u64_metadata_write(writer, cookie, val) LOCI_SHOW_x64(writer, cookie, val)
+#define LOCI_SHOW_u32_packet_in_mask_equal_master(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_packet_in_mask_slave(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_port_status_mask_equal_master(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_port_status_mask_slave(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_flow_removed_mask_equal_master(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u32_flow_removed_mask_slave(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u8_auxiliary_id(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u32_meter_id(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_rate(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_burst_size(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u8_prec_level(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u64_packet_band_count(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_byte_band_count(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u32_max_meter(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_band_types(writer, cookie, val) LOCI_SHOW_x32(writer, cookie, val)
+#define LOCI_SHOW_u8_max_bands(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u8_max_color(writer, cookie, val) LOCI_SHOW_u8(writer, cookie, val)
+#define LOCI_SHOW_u64_packet_in_count(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_u64_byte_in_count(writer, cookie, val) LOCI_SHOW_u64(writer, cookie, val)
+#define LOCI_SHOW_octets_experimenter_data(writer, cookie, val) LOCI_SHOW_octets(writer, cookie, val)
+#define LOCI_SHOW_u32_dst(writer, cookie, val)        LOCI_SHOW_ipv4(writer, cookie, val)
+#define LOCI_SHOW_u32_service(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u32_status(writer, cookie, val) LOCI_SHOW_u32(writer, cookie, val)
+#define LOCI_SHOW_u16_subtype(writer, cookie, val) LOCI_SHOW_u16(writer, cookie, val)
+#define LOCI_SHOW_u32_ipv4_addr(writer, cookie, val) LOCI_SHOW_ipv4(writer, cookie, val)
+#define LOCI_SHOW_u32_ipv4_netmask(writer, cookie, val) LOCI_SHOW_ipv4(writer, cookie, val)
+
+
+
+
+#endif /* _LOCI_SHOW_H_ */
diff --git a/c_gen/templates/locitest/locitest_config.c b/c_gen/templates/locitest/locitest_config.c
new file mode 100644
index 0000000..c7c6fa2
--- /dev/null
+++ b/c_gen/templates/locitest/locitest_config.c
@@ -0,0 +1,49 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2012, Big Switch Networks, Inc. */
+
+/******************************************************************************
+ *
+ *  /module/src/locitest_config.c
+ *
+ *  locitest Config Information
+ *
+ *****************************************************************************/
+#include <locitest/locitest_config.h> 
+#include <locitest/locitest.h> 
+#include "locitest_int.h" 
+#include <stdlib.h>
+#include <string.h>
+
+
+
+/* <auto.start.cdefs(LOCITEST_CONFIG_HEADER).source> */
+/* <auto.end.cdefs(LOCITEST_CONFIG_HEADER).source> */
+
+
+
diff --git a/c_gen/templates/locitest/locitest_enums.c b/c_gen/templates/locitest/locitest_enums.c
new file mode 100644
index 0000000..7c3f074
--- /dev/null
+++ b/c_gen/templates/locitest/locitest_enums.c
@@ -0,0 +1,47 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2012, Big Switch Networks, Inc. */
+
+/******************************************************************************
+ *
+ *  /module/src/locitest_enums.c
+ *
+ *  locitest Enum Definitions
+ *
+ *****************************************************************************/
+#include <locitest/locitest_config.h> 
+#include <locitest/locitest.h> 
+#include "locitest_int.h" 
+
+
+
+#include <stdlib.h>
+#include <string.h>
+
+/* <auto.start.enum(ALL).source> */
+/* <auto.end.enum(ALL).source> */
diff --git a/c_gen/templates/locitest/locitest_int.h b/c_gen/templates/locitest/locitest_int.h
new file mode 100644
index 0000000..b058604
--- /dev/null
+++ b/c_gen/templates/locitest/locitest_int.h
@@ -0,0 +1,46 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2012, Big Switch Networks, Inc. */
+
+/******************************************************************************
+ *
+ *  /module/src/locitest_int.h
+ *
+ *  locitest Internal Header
+ *
+ *****************************************************************************/
+#ifndef __LOCITEST_INT_H__
+#define __LOCITEST_INT_H__
+
+
+#include <locitest/locitest_config.h>
+
+
+
+#include <locitest/locitest.h> 
+#endif /* __LOCITEST_INT_H__ */
diff --git a/c_gen/templates/locitest/test_ext.c b/c_gen/templates/locitest/test_ext.c
new file mode 100644
index 0000000..1efcaf6
--- /dev/null
+++ b/c_gen/templates/locitest/test_ext.c
@@ -0,0 +1,69 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2012, Big Switch Networks, Inc. */
+
+/**
+ * Test extensions
+ */
+
+#include <locitest/test_common.h>
+
+/**
+ * Simple tests for extension objects
+ */
+
+int
+test_ext_objs(void)
+{
+    of_action_bsn_mirror_t *obj;
+
+    obj = of_action_bsn_mirror_new(OF_VERSION_1_0);
+    TEST_ASSERT(obj != NULL);
+    TEST_ASSERT(obj->object_id == OF_ACTION_BSN_MIRROR);
+
+    TEST_ASSERT(of_action_to_object_id(OF_EXPERIMENTER_TYPE, OF_VERSION_1_0) ==
+                OF_ACTION_EXPERIMENTER);
+
+    TEST_ASSERT(of_action_id_to_object_id(OF_EXPERIMENTER_TYPE, OF_VERSION_1_0) ==
+                OF_ACTION_ID_EXPERIMENTER);
+
+    TEST_ASSERT(of_instruction_to_object_id(OF_EXPERIMENTER_TYPE, OF_VERSION_1_0) ==
+                OF_INSTRUCTION_EXPERIMENTER);
+
+    TEST_ASSERT(of_queue_prop_to_object_id(OF_EXPERIMENTER_TYPE, OF_VERSION_1_0) ==
+                OF_QUEUE_PROP_EXPERIMENTER);
+
+    TEST_ASSERT(of_meter_band_to_object_id(OF_EXPERIMENTER_TYPE, OF_VERSION_1_0) ==
+                OF_METER_BAND_EXPERIMENTER);
+
+    TEST_ASSERT(of_table_feature_prop_to_object_id(OF_EXPERIMENTER_TYPE,
+                                                   OF_VERSION_1_3) ==
+                OF_TABLE_FEATURE_PROP_EXPERIMENTER);
+
+    return TEST_PASS;
+}
diff --git a/c_gen/templates/locitest/test_list_limits.c b/c_gen/templates/locitest/test_list_limits.c
new file mode 100644
index 0000000..aef91d8
--- /dev/null
+++ b/c_gen/templates/locitest/test_list_limits.c
@@ -0,0 +1,100 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/**
+ * Test that list append fails gracefully when running out of wire buffer
+ * space.
+ */
+
+#include <locitest/test_common.h>
+
+static int
+test_list_limits(void)
+{
+    of_flow_stats_reply_t *obj = of_flow_stats_reply_new(OF_VERSION_1_0);
+    of_list_flow_stats_entry_t list;
+    of_flow_stats_entry_t *element = of_flow_stats_entry_new(OF_VERSION_1_0);
+    int i = 0;
+
+    of_flow_stats_reply_entries_bind(obj, &list);
+
+    ASSERT(element != NULL);
+
+
+    while (1) {
+        int rv = of_list_flow_stats_entry_append(&list, element);
+        ASSERT(rv == OF_ERROR_NONE || rv == OF_ERROR_RESOURCE);
+        if (rv != OF_ERROR_NONE) {
+            break;
+        }
+        i++;
+    }
+
+    ASSERT(i == 744);
+
+    of_flow_stats_entry_delete(element);
+    of_flow_stats_reply_delete(obj);
+    return TEST_PASS;
+}
+
+static int
+test_list_limits_bind(void)
+{
+    of_flow_stats_reply_t *obj = of_flow_stats_reply_new(OF_VERSION_1_0);
+    of_list_flow_stats_entry_t list;
+    int i = 0;
+    of_flow_stats_reply_entries_bind(obj, &list);
+
+
+    while (1) {
+        of_flow_stats_entry_t element;
+        int rv; 
+        of_flow_stats_entry_init(&element, OF_VERSION_1_0, -1, 1);
+        rv = of_list_flow_stats_entry_append_bind(&list, &element);
+        ASSERT(rv == OF_ERROR_NONE || rv == OF_ERROR_RESOURCE);
+        if (rv != OF_ERROR_NONE) {
+            break;
+        }
+        i++;
+    }
+
+    ASSERT(i == 744);
+
+    of_flow_stats_reply_delete(obj);
+    return TEST_PASS;
+}
+
+int
+run_list_limits_tests(void)
+{
+    RUN_TEST(list_limits);
+    RUN_TEST(list_limits_bind);
+
+    return TEST_PASS;
+}
diff --git a/c_gen/templates/locitest/test_match_utils.c b/c_gen/templates/locitest/test_match_utils.c
new file mode 100644
index 0000000..6e4a95e
--- /dev/null
+++ b/c_gen/templates/locitest/test_match_utils.c
@@ -0,0 +1,222 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/**
+ *
+ * Match utility test cases
+ *
+ */
+
+#include <locitest/test_common.h>
+
+int
+test_match_utils(void)
+{
+    of_mac_addr_t m1 = {{0,0,1,1,3,3}};
+    of_mac_addr_t m2 = {{0,0,1,1,3,3}};  /* m1 == m2 */
+    of_mac_addr_t m3 = {{0,0,1,1,1,1}};  /* m1 is more specific than m3 */
+    of_mac_addr_t m4 = {{0xf,0,1,1,3,3}};  /* m1 is not more specific than m4 */
+    of_mac_addr_t m5 = {{0,0,1,1,3,0xf}};  /* m1 is not more specific than m5 */
+
+    of_mac_addr_t m_mask1 = {{0xff, 0xff, 0xff, 0xff, 0xff, 0xff}};
+    of_mac_addr_t m_mask2 = {{0xff, 0xff, 0xff, 0xff, 0, 0}};
+
+    /* m1 matches m2 for mask1 */
+    /* m1 matches m2 for mask2 */
+    /* m1 does not match m3, m4, m5 for mask1 */
+    /* m1 matches m3, m5 for mask2 */
+    /* m1 does not match m4 for mask2 */
+
+    of_ipv6_t i1 = {{0,0,0,0,1,1,1,1,3,3,3,3,7,7,7,7}};  /* same as above */
+    of_ipv6_t i2 = {{0,0,0,0,1,1,1,1,3,3,3,3,7,7,7,7}};  
+    of_ipv6_t i3 = {{0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1}};
+    of_ipv6_t i4 = {{0xf,0,0,0,1,1,1,1,3,3,3,3,7,7,7,7}};
+    of_ipv6_t i5 = {{0,0,0,0,1,1,1,1,3,3,3,3,7,7,7,0xf}};
+
+    of_ipv6_t i_mask1 = {{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+                          0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff}};
+    of_ipv6_t i_mask2 = {{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+                          0, 0, 0, 0, 0, 0, 0, 0}};
+
+    /* Same relationships as above */
+
+    uint32_t v1 = 3;  /* same as above */
+    uint32_t v2 = 3;
+    uint32_t v3 = 1;
+    uint32_t v4 = 4;
+    uint32_t v5 = (1U << 31) | 1;
+
+    uint32_t u32_mask1 = -1;
+    uint32_t u32_mask2 = 1;
+
+    uint64_t w1 = 3;  /* same as above */
+    uint64_t w2 = 3;
+    uint64_t w3 = 1;
+    uint64_t w4 = 4;
+    uint64_t w5 = (1ULL << 63) | 1ULL;
+
+    uint64_t u64_mask1 = -1;
+    uint64_t u64_mask2 = 1;
+
+    /* Match structures */
+    of_match_t match1, match2;
+
+    TEST_ASSERT(OF_MORE_SPECIFIC_MAC_ADDR(&m1, &m2));
+    TEST_ASSERT(OF_MORE_SPECIFIC_MAC_ADDR(&m1, &m3));
+    TEST_ASSERT(!OF_MORE_SPECIFIC_MAC_ADDR(&m1, &m4));
+    TEST_ASSERT(!OF_MORE_SPECIFIC_MAC_ADDR(&m1, &m5));
+
+    TEST_ASSERT(OF_MORE_SPECIFIC_IPV6(&i1, &i2));
+    TEST_ASSERT(OF_MORE_SPECIFIC_IPV6(&i1, &i3));
+    TEST_ASSERT(!OF_MORE_SPECIFIC_IPV6(&i1, &i4));
+    TEST_ASSERT(!OF_MORE_SPECIFIC_IPV6(&i1, &i5));
+
+    TEST_ASSERT(OF_MORE_SPECIFIC_INT(v1, v2));
+    TEST_ASSERT(OF_MORE_SPECIFIC_INT(v1, v3));
+    TEST_ASSERT(!OF_MORE_SPECIFIC_INT(v1, v4));
+    TEST_ASSERT(!OF_MORE_SPECIFIC_INT(v1, v5));
+
+    TEST_ASSERT(OF_MORE_SPECIFIC_INT(w1, w2));
+    TEST_ASSERT(OF_MORE_SPECIFIC_INT(w1, w3));
+    TEST_ASSERT(!OF_MORE_SPECIFIC_INT(w1, w4));
+    TEST_ASSERT(!OF_MORE_SPECIFIC_INT(w1, w5));
+
+    /* Test restricted matches on macs */
+    TEST_ASSERT(OF_RESTRICTED_MATCH_MAC_ADDR(&m1, &m2, &m_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_MAC_ADDR(&m1, &m3, &m_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_MAC_ADDR(&m1, &m4, &m_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_MAC_ADDR(&m1, &m5, &m_mask1));
+
+    TEST_ASSERT(OF_RESTRICTED_MATCH_MAC_ADDR(&m1, &m2, &m_mask2));
+    TEST_ASSERT(OF_RESTRICTED_MATCH_MAC_ADDR(&m1, &m3, &m_mask2));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_MAC_ADDR(&m1, &m4, &m_mask2));
+    TEST_ASSERT(OF_RESTRICTED_MATCH_MAC_ADDR(&m1, &m5, &m_mask2));
+
+    /* Test overlap */
+    TEST_ASSERT(OF_OVERLAP_MAC_ADDR(&m1, &m2, &m_mask1, &m_mask2));
+    TEST_ASSERT(OF_OVERLAP_MAC_ADDR(&m1, &m3, &m_mask1, &m_mask2));
+    TEST_ASSERT(!OF_OVERLAP_MAC_ADDR(&m1, &m3, &m_mask1, &m_mask1));
+    TEST_ASSERT(!OF_OVERLAP_MAC_ADDR(&m1, &m4, &m_mask1, &m_mask2));
+    TEST_ASSERT(OF_OVERLAP_MAC_ADDR(&m1, &m5, &m_mask1, &m_mask2));
+
+    /* Test restricted matches on ipv6 */
+    TEST_ASSERT(OF_RESTRICTED_MATCH_IPV6(&i1, &i2, &i_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_IPV6(&i1, &i3, &i_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_IPV6(&i1, &i4, &i_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_IPV6(&i1, &i5, &i_mask1));
+
+    TEST_ASSERT(OF_RESTRICTED_MATCH_IPV6(&i1, &i2, &i_mask2));
+    TEST_ASSERT(OF_RESTRICTED_MATCH_IPV6(&i1, &i3, &i_mask2));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_IPV6(&i1, &i4, &i_mask2));
+    TEST_ASSERT(OF_RESTRICTED_MATCH_IPV6(&i1, &i5, &i_mask2));
+
+    /* Test overlap */
+    TEST_ASSERT(OF_OVERLAP_IPV6(&i1, &i2, &i_mask1, &i_mask2));
+    TEST_ASSERT(OF_OVERLAP_IPV6(&i1, &i3, &i_mask1, &i_mask2));
+    TEST_ASSERT(!OF_OVERLAP_IPV6(&i1, &i3, &i_mask1, &i_mask1));
+    TEST_ASSERT(!OF_OVERLAP_IPV6(&i1, &i4, &i_mask1, &i_mask2));
+    TEST_ASSERT(OF_OVERLAP_IPV6(&i1, &i5, &i_mask1, &i_mask2));
+
+    /* Test restricted matches on uint32 */
+    TEST_ASSERT(OF_RESTRICTED_MATCH_INT(v1, v2, u32_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_INT(v1, v3, u32_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_INT(v1, v4, u32_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_INT(v1, v5, u32_mask1));
+
+    TEST_ASSERT(OF_RESTRICTED_MATCH_INT(v1, v2, u32_mask2));
+    TEST_ASSERT(OF_RESTRICTED_MATCH_INT(v1, v3, u32_mask2));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_INT(v1, v4, u32_mask2));
+    TEST_ASSERT(OF_RESTRICTED_MATCH_INT(v1, v5, u32_mask2));
+
+    /* Test overlap */
+    TEST_ASSERT(OF_OVERLAP_INT(v1, v2, u32_mask1, u32_mask2));
+    TEST_ASSERT(OF_OVERLAP_INT(v1, v3, u32_mask1, u32_mask2));
+    TEST_ASSERT(!OF_OVERLAP_INT(v1, v3, u32_mask1, u32_mask1));
+    TEST_ASSERT(!OF_OVERLAP_INT(v1, v4, u32_mask1, u32_mask2));
+    TEST_ASSERT(OF_OVERLAP_INT(v1, v5, u32_mask1, u32_mask2));
+
+    /* Test restricted matches on uint64 */
+    TEST_ASSERT(OF_RESTRICTED_MATCH_INT(w1, w2, u64_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_INT(w1, w3, u64_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_INT(w1, w4, u64_mask1));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_INT(w1, w5, u64_mask1));
+
+    TEST_ASSERT(OF_RESTRICTED_MATCH_INT(w1, w2, u64_mask2));
+    TEST_ASSERT(OF_RESTRICTED_MATCH_INT(w1, w3, u64_mask2));
+    TEST_ASSERT(!OF_RESTRICTED_MATCH_INT(w1, w4, u64_mask2));
+    TEST_ASSERT(OF_RESTRICTED_MATCH_INT(w1, w5, u64_mask2));
+
+    /* Test overlap */
+    TEST_ASSERT(OF_OVERLAP_INT(w1, w2, u64_mask1, u64_mask2));
+    TEST_ASSERT(OF_OVERLAP_INT(w1, w3, u64_mask1, u64_mask2));
+    TEST_ASSERT(!OF_OVERLAP_INT(w1, w3, u64_mask1, u64_mask1));
+    TEST_ASSERT(!OF_OVERLAP_INT(w1, w4, u64_mask1, u64_mask2));
+    TEST_ASSERT(OF_OVERLAP_INT(w1, w5, u64_mask1, u64_mask2));
+
+    /* Test match structures */
+    of_match_populate(&match1, OF_VERSION_1_2, 1);
+    of_match_populate(&match2, OF_VERSION_1_2, 1);
+    TEST_ASSERT(of_match_eq(&match1, &match2));
+    TEST_ASSERT(of_match_eq(&match2, &match1));
+    TEST_ASSERT(of_match_more_specific(&match1, &match2));
+    TEST_ASSERT(of_match_more_specific(&match2, &match1));
+    TEST_ASSERT(of_match_overlap(&match1, &match2));
+    TEST_ASSERT(of_match_overlap(&match2, &match1));
+
+    /* Change match2 so that it is still extended by match1 */
+    memset(&match2.masks.eth_dst, 0, sizeof(of_mac_addr_t));
+    TEST_ASSERT(of_match_more_specific(&match1, &match2));
+    TEST_ASSERT(!of_match_more_specific(&match2, &match1));
+    TEST_ASSERT(!of_match_eq(&match1, &match2));
+    TEST_ASSERT(of_match_overlap(&match1, &match2));
+    TEST_ASSERT(of_match_overlap(&match2, &match1));
+
+    /* Now change a value so that matches disagree */
+    match2.fields.in_port++;
+    TEST_ASSERT(!of_match_more_specific(&match1, &match2));
+    TEST_ASSERT(!of_match_overlap(&match1, &match2));
+    TEST_ASSERT(!of_match_overlap(&match2, &match1));
+    /* Clear the in_port mask on match2; match1 should extend it again */
+    match2.masks.in_port = 0;
+    TEST_ASSERT(of_match_more_specific(&match1, &match2));
+    TEST_ASSERT(of_match_overlap(&match1, &match2));
+    TEST_ASSERT(of_match_overlap(&match2, &match1));
+
+    /* Now change the masks so the matches overlap, but neither is more specific */
+    match1.fields.in_port = 0x7;
+    match1.masks.in_port = 0x7;
+    match2.fields.in_port = 0xe;
+    match2.masks.in_port = 0xe;
+    TEST_ASSERT(!of_match_more_specific(&match1, &match2));
+    TEST_ASSERT(!of_match_more_specific(&match2, &match1));
+    TEST_ASSERT(of_match_overlap(&match1, &match2));
+    TEST_ASSERT(of_match_overlap(&match2, &match1));
+    
+    return TEST_PASS;
+}
diff --git a/c_gen/templates/locitest/test_setup_from_add.c b/c_gen/templates/locitest/test_setup_from_add.c
new file mode 100644
index 0000000..6988572
--- /dev/null
+++ b/c_gen/templates/locitest/test_setup_from_add.c
@@ -0,0 +1,132 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/**
+ * Test code for setup from flow add routines
+ */
+
+#include <locitest/test_common.h>
+
+#if !defined(__APPLE__)
+#include <mcheck.h>
+#define MCHECK_INIT mcheck(NULL)
+#else /* mcheck not available under OS X */
+#define MCHECK_INIT do { } while (0)
+#endif
+
+
+static int
+test_removed_setup_from_add(void)
+{
+    of_flow_removed_t *removed;
+    of_flow_add_t *add;
+    of_match_t m1, m2;
+
+    TEST_ASSERT((add = of_flow_add_new(OF_VERSION_1_0)) != NULL);
+    TEST_ASSERT((removed = of_flow_removed_new(OF_VERSION_1_0)) != NULL);
+
+    TEST_ASSERT(of_flow_add_OF_VERSION_1_0_populate(add, 1) != 0);
+    TEST_ASSERT(of_flow_add_match_get(add, &m1) == 0);
+
+    TEST_ASSERT(of_flow_removed_setup_from_flow_add(removed, add) == 0);
+    TEST_ASSERT(of_flow_removed_match_get(removed, &m2) == 0);
+    TEST_ASSERT(memcmp(&m1, &m2, sizeof(m1)) == 0);
+
+    of_flow_add_delete(add);
+    of_flow_removed_delete(removed);
+
+    return TEST_PASS;
+}
+
+
+static int
+test_stats_entry_setup_from_add(void)
+{
+    of_flow_add_t *add;
+    of_flow_stats_entry_t *entry;
+    of_match_t m1, m2;
+    of_list_action_t *list;
+    of_list_action_t list_out;
+
+    TEST_ASSERT((add = of_flow_add_new(OF_VERSION_1_0)) != NULL);
+    TEST_ASSERT((entry = of_flow_stats_entry_new(OF_VERSION_1_0)) != NULL);
+
+    TEST_ASSERT(of_flow_add_OF_VERSION_1_0_populate(add, 1) != 0);
+    TEST_ASSERT(of_flow_add_match_get(add, &m1) == 0);
+
+    TEST_ASSERT(of_flow_stats_entry_setup_from_flow_add(entry, add, NULL) == 0);
+    TEST_ASSERT(of_flow_stats_entry_match_get(entry, &m2) == 0);
+    TEST_ASSERT(memcmp(&m1, &m2, sizeof(m1)) == 0);
+
+    of_flow_add_delete(add);
+    of_flow_stats_entry_delete(entry);
+
+    /* @todo check action lists agree */
+
+    /* Same with an external action list */
+
+    TEST_ASSERT((add = of_flow_add_new(OF_VERSION_1_0)) != NULL);
+    TEST_ASSERT((entry = of_flow_stats_entry_new(OF_VERSION_1_0)) != NULL);
+
+    TEST_ASSERT(of_flow_add_OF_VERSION_1_0_populate(add, 1) != 0);
+    TEST_ASSERT(of_flow_add_match_get(add, &m1) == 0);
+
+    list = of_list_action_new(OF_VERSION_1_0);
+    TEST_ASSERT(list != NULL);
+    TEST_ASSERT(of_list_action_OF_VERSION_1_0_populate(list, 1) != 0);
+
+    /* Verify matches agree */
+    TEST_ASSERT(of_flow_stats_entry_setup_from_flow_add(entry, add, list) == 0);
+    TEST_ASSERT(of_flow_stats_entry_match_get(entry, &m2) == 0);
+    TEST_ASSERT(memcmp(&m1, &m2, sizeof(m1)) == 0);
+
+    of_list_action_init(&list_out, OF_VERSION_1_0, 0, 1);
+    of_flow_stats_entry_actions_bind(entry, &list_out);
+
+    /* Verify lists agree */
+    TEST_ASSERT(list->length == list_out.length);
+    TEST_ASSERT(memcmp(WBUF_BUF(list->wire_object.wbuf),
+                       WBUF_BUF(list_out.wire_object.wbuf),
+                       list->length) == 0);
+
+    of_flow_add_delete(add);
+    of_list_action_delete(list);
+    of_flow_stats_entry_delete(entry);
+
+    return TEST_PASS;
+}
+
+
+int run_setup_from_add_tests(void)
+{
+    RUN_TEST(removed_setup_from_add);
+    RUN_TEST(stats_entry_setup_from_add);
+
+    return TEST_PASS;
+}
diff --git a/c_gen/templates/locitest/test_utils.c b/c_gen/templates/locitest/test_utils.c
new file mode 100644
index 0000000..adc6d42
--- /dev/null
+++ b/c_gen/templates/locitest/test_utils.c
@@ -0,0 +1,128 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/**
+ *
+ * Test utility functions
+ *
+ */
+
+#include <locitest/test_common.h>
+#include <loci/of_utils.h>
+
+/**
+ * Test has output port utility function
+ */
+int
+test_has_outport(void)
+{
+    of_list_action_t *list;
+    of_action_t elt;
+    of_action_set_dl_src_t *set_dl_src;
+    of_action_output_t *output;
+
+    set_dl_src = &elt.set_dl_src;
+    output = &elt.output;
+
+    list = of_list_action_new(OF_VERSION_1_0);
+    TEST_ASSERT(list != NULL);
+
+    TEST_ASSERT(of_action_list_has_out_port(list, OF_PORT_DEST_WILDCARD));
+    TEST_ASSERT(!of_action_list_has_out_port(list, 1));
+
+    /* Add some other action */
+    of_action_set_dl_src_init(set_dl_src, OF_VERSION_1_0, -1, 1);
+    TEST_OK(of_list_action_append_bind(list, (of_action_t *)set_dl_src));
+
+    TEST_ASSERT(of_action_list_has_out_port(list, OF_PORT_DEST_WILDCARD));
+    TEST_ASSERT(!of_action_list_has_out_port(list, 1));
+
+    /* Add port 2 */
+    of_action_output_init(output, OF_VERSION_1_0, -1, 1);
+    TEST_OK(of_list_action_append_bind(list, (of_action_t *)output));
+    of_action_output_port_set(output, 2);
+
+    TEST_ASSERT(of_action_list_has_out_port(list, OF_PORT_DEST_WILDCARD));
+    TEST_ASSERT(!of_action_list_has_out_port(list, 1));
+    TEST_ASSERT(of_action_list_has_out_port(list, 2));
+
+    /* Add port 1 */
+    of_action_output_init(output, OF_VERSION_1_0, -1, 1);
+    TEST_OK(of_list_action_append_bind(list, (of_action_t *)output));
+    of_action_output_port_set(output, 1);
+
+    TEST_ASSERT(of_action_list_has_out_port(list, OF_PORT_DEST_WILDCARD));
+    TEST_ASSERT(of_action_list_has_out_port(list, 1));
+    TEST_ASSERT(of_action_list_has_out_port(list, 2));
+
+    /* Start over and put action at front of list */
+    of_list_action_delete(list);
+
+    list = of_list_action_new(OF_VERSION_1_0);
+    TEST_ASSERT(list != NULL);
+
+    /* Add port 2 */
+    of_action_output_init(output, OF_VERSION_1_0, -1, 1);
+    TEST_OK(of_list_action_append_bind(list, (of_action_t *)output));
+    of_action_output_port_set(output, 2);
+
+    TEST_ASSERT(of_action_list_has_out_port(list, OF_PORT_DEST_WILDCARD));
+    TEST_ASSERT(!of_action_list_has_out_port(list, 1));
+    TEST_ASSERT(of_action_list_has_out_port(list, 2));
+
+    /* Add some other action */
+    of_action_set_dl_src_init(set_dl_src, OF_VERSION_1_0, -1, 1);
+    TEST_OK(of_list_action_append_bind(list, (of_action_t *)set_dl_src));
+
+    TEST_ASSERT(of_action_list_has_out_port(list, OF_PORT_DEST_WILDCARD));
+    TEST_ASSERT(!of_action_list_has_out_port(list, 1));
+    TEST_ASSERT(of_action_list_has_out_port(list, 2));
+
+    /* Add port 1 */
+    of_action_output_init(output, OF_VERSION_1_0, -1, 1);
+    TEST_OK(of_list_action_append_bind(list, (of_action_t *)output));
+    of_action_output_port_set(output, 1);
+
+    TEST_ASSERT(of_action_list_has_out_port(list, OF_PORT_DEST_WILDCARD));
+    TEST_ASSERT(of_action_list_has_out_port(list, 1));
+    TEST_ASSERT(of_action_list_has_out_port(list, 2));
+
+    of_list_action_delete(list);
+
+    return TEST_PASS;
+}
+
+int
+run_utility_tests(void)
+{
+    RUN_TEST(has_outport);
+    RUN_TEST(dump_objs);
+
+    return TEST_PASS;
+}
diff --git a/c_gen/templates/locitest/test_validator.c b/c_gen/templates/locitest/test_validator.c
new file mode 100644
index 0000000..cac67f7
--- /dev/null
+++ b/c_gen/templates/locitest/test_validator.c
@@ -0,0 +1,113 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/**
+ * Test message validator
+ *
+ * Run the message validator on corrupt messages to ensure it catches them.
+ */
+
+#include "loci_log.h"
+
+#include <locitest/test_common.h>
+#include <loci/loci_validator.h>
+
+static int
+test_validate_fixed_length(void)
+{
+    of_table_stats_request_t *obj = of_table_stats_request_new(OF_VERSION_1_0);
+    of_message_t msg = OF_OBJECT_TO_MESSAGE(obj);
+
+    TEST_ASSERT(of_validate_message(msg, of_message_length_get(msg)) == 0);
+
+    of_message_length_set(msg, of_message_length_get(msg) - 1);
+    TEST_ASSERT(of_validate_message(msg, of_message_length_get(msg)) == -1);
+
+    of_table_stats_request_delete(obj);
+    return TEST_PASS;
+}
+
+static int
+test_validate_fixed_length_list(void)
+{
+    of_table_stats_reply_t *obj = of_table_stats_reply_new(OF_VERSION_1_0);
+    of_list_table_stats_entry_t list;
+    of_table_stats_entry_t element;
+    of_message_t msg; 
+    of_table_stats_reply_entries_bind(obj, &list);
+    of_table_stats_entry_init(&element, OF_VERSION_1_0, -1, 1);
+    of_list_table_stats_entry_append_bind(&list, &element);
+    of_list_table_stats_entry_append_bind(&list, &element);
+    msg = OF_OBJECT_TO_MESSAGE(obj);
+
+    TEST_ASSERT(of_validate_message(msg, of_message_length_get(msg)) == 0);
+
+    of_message_length_set(msg, of_message_length_get(msg) - 1);
+    TEST_ASSERT(of_validate_message(msg, of_message_length_get(msg)) == -1);
+
+    of_table_stats_reply_delete(obj);
+    return TEST_PASS;
+}
+
+static int
+test_validate_tlv16_list(void)
+{
+    of_flow_modify_t *obj = of_flow_modify_new(OF_VERSION_1_0);
+    of_list_action_t list;
+    of_action_set_tp_dst_t element1;
+    of_action_output_t element2;
+    of_message_t msg; 
+    of_flow_modify_actions_bind(obj, &list);
+    of_action_set_tp_dst_init(&element1, OF_VERSION_1_0, -1, 1);
+    of_list_action_append_bind(&list, (of_action_t *)&element1);
+    of_action_output_init(&element2, OF_VERSION_1_0, -1, 1);
+    of_list_action_append_bind(&list, (of_action_t *)&element2);
+    msg = OF_OBJECT_TO_MESSAGE(obj);
+
+    TEST_ASSERT(of_validate_message(msg, of_message_length_get(msg)) == 0);
+
+    of_message_length_set(msg, of_message_length_get(msg) - 1);
+    TEST_ASSERT(of_validate_message(msg, of_message_length_get(msg)) == -1);
+
+    of_message_length_set(msg, of_message_length_get(msg) + 2);
+    TEST_ASSERT(of_validate_message(msg, of_message_length_get(msg)) == -1);
+
+    of_flow_modify_delete(obj);
+    return TEST_PASS;
+}
+
+int
+run_validator_tests(void)
+{
+    RUN_TEST(validate_fixed_length);
+    RUN_TEST(validate_fixed_length_list);
+    RUN_TEST(validate_tlv16_list);
+
+    return TEST_PASS;
+}
diff --git a/c_gen/templates/locitest/unittest.h b/c_gen/templates/locitest/unittest.h
new file mode 100644
index 0000000..7ffc413
--- /dev/null
+++ b/c_gen/templates/locitest/unittest.h
@@ -0,0 +1,56 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+#ifndef UNITTEST_H
+#define UNITTEST_H
+
+#include <stdio.h>
+
+#define TEST_PASS 1
+#define TEST_FAIL 0
+
+#define TESTCASE(foo, rv) do {                                          \
+        fprintf(stderr, "test %s:", #foo);                              \
+        fprintf(stderr, "  %s\n", (rv = test_##foo()) ? "PASS" : "FAIL"); \
+    } while (0)
+
+#define TEST_ASSERT(result) do {                                        \
+        if (!(result)) {                                                \
+            fprintf(stderr, "\nTEST ASSERT FAILURE "                    \
+                    #result " :: %s:%d\n", __FILE__, __LINE__);         \
+            ASSERT(0);                                                  \
+            return TEST_FAIL;                                           \
+        }                                                               \
+    } while (0)
+
+#define TEST_ASSERT_EQUAL(expected, result) \
+    TEST_ASSERT(((expected) == (result)))
+
+#define TEST_ASSERT_NOT_EQUAL(expected, result) \
+    TEST_ASSERT(((expected) != (result)))
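+
+/*
+ * Illustrative sketch (editor's note, not part of the template): a test
+ * case is a function named test_<name> that returns TEST_PASS or TEST_FAIL,
+ * and TESTCASE(<name>, rv) runs it and reports the result on stderr.
+ * ASSERT(), used by TEST_ASSERT, is assumed to be provided by the including
+ * code.
+ */
+#if 0 /* example only */
+static int
+test_arithmetic(void)
+{
+    TEST_ASSERT(1 + 1 == 2);
+    TEST_ASSERT_EQUAL(4, 2 * 2);
+    TEST_ASSERT_NOT_EQUAL(5, 2 * 2);
+    return TEST_PASS;
+}
+
+static void
+run_example_tests(void)
+{
+    int rv;
+
+    TESTCASE(arithmetic, rv);   /* prints: test arithmetic:  PASS */
+    (void) rv;
+}
+#endif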
+
+#endif
diff --git a/c_gen/templates/of_buffer.h b/c_gen/templates/of_buffer.h
new file mode 100644
index 0000000..05cc587
--- /dev/null
+++ b/c_gen/templates/of_buffer.h
@@ -0,0 +1,139 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/****************************************************************
+ *
+ * Low level buffer manipulation functions
+ * Requires low level type defs
+ *
+ ****************************************************************/
+
+#ifndef _OF_BUFFER_H_
+#define _OF_BUFFER_H_
+
+#include <loci/loci_base.h>
+#include <stdio.h>
+
+/* Function type used for a "free" operation of a low level buffer */
+typedef void (*of_buffer_free_f)(void *buf);
+
+
+/****************************************************************
+ *
+ * Low level buffers accessors handling endian and alignment
+ *
+ ****************************************************************/
+
+static inline void
+buf_ipv6_get(uint8_t *buf, of_ipv6_t *val) {
+    MEMCPY(val, buf, sizeof(*val));
+    IPV6_NTOH(val, val); /* probably a no-op */
+}
+
+static inline void
+buf_mac_get(uint8_t *buf, of_mac_addr_t *val) {
+    MEMCPY(val, buf, sizeof(*val));
+}
+
+static inline void
+buf_u64_get(uint8_t *buf, uint64_t *val) {
+    MEMCPY(val, buf, sizeof(*val));
+    *val = U64_NTOH(*val);
+}
+
+static inline void
+buf_u32_get(uint8_t *buf, uint32_t *val) {
+    MEMCPY(val, buf, sizeof(*val));
+    *val = U32_NTOH(*val);
+}
+
+static inline void
+buf_u16_get(uint8_t *buf, uint16_t *val) {
+    MEMCPY(val, buf, sizeof(uint16_t));
+    *val = U16_NTOH(*val);
+}
+
+static inline void
+buf_u8_get(uint8_t *buf, uint8_t *val) {
+    *val = *buf;
+}
+
+static inline void
+buf_octets_get(uint8_t *buf, uint8_t *dst, int bytes) {
+    MEMCPY(dst, buf, bytes);
+}
+
+static inline void
+buf_ipv6_set(uint8_t *buf, of_ipv6_t *val) {
+    /* FIXME:  If this is not a NO-OP, need to change the code */
+#if 1
+    MEMCPY(buf, val, sizeof(*val));
+#else
+    of_ipv6_t w_val;
+    MEMCPY(&w_val, val, sizeof(w_val));
+    IPV6_HTON(&w_val, &w_val);
+    MEMCPY(buf, &w_val, sizeof(w_val));
+#endif
+}
+
+static inline void
+buf_mac_addr_set(uint8_t *buf, of_mac_addr_t *val) {
+    MEMCPY(buf, val, sizeof(of_mac_addr_t));
+}
+
+static inline void
+buf_u64_set(uint8_t *buf, uint64_t val) {
+    val = U64_HTON(val);
+    MEMCPY(buf, &val, sizeof(uint64_t));
+}
+
+static inline void
+buf_u32_set(uint8_t *buf, uint32_t val) {
+    val = U32_HTON(val);
+    MEMCPY(buf, &val, sizeof(uint32_t));
+}
+
+static inline void
+buf_u16_set(uint8_t *buf, uint16_t val) {
+    val = U16_HTON(val);
+    MEMCPY(buf, &val, sizeof(uint16_t));
+}
+
+static inline void
+buf_u8_set(uint8_t *buf, uint8_t val) {
+    *buf = val;
+}
+
+static inline void
+buf_octets_set(uint8_t *buf, uint8_t *src, int bytes) {
+    MEMCPY(buf, src, bytes);
+}
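+
+/*
+ * Illustrative usage of the accessors above (a sketch only; the buffer
+ * and the field offset are hypothetical):
+ *
+ *     uint8_t pkt[64];
+ *     uint16_t len;
+ *
+ *     buf_u16_set(pkt + 2, 64);    // write the field in network byte order
+ *     buf_u16_get(pkt + 2, &len);  // read it back in host byte order
+ */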
+
+
+#endif /* _OF_BUFFER_H_ */
diff --git a/c_gen/templates/of_doc.h b/c_gen/templates/of_doc.h
new file mode 100644
index 0000000..9e352f0
--- /dev/null
+++ b/c_gen/templates/of_doc.h
@@ -0,0 +1,439 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/**
+ * @file of_doc.h
+ * @brief Documentation of sample functions
+ *
+ * This file is for documentation purposes only
+ *
+ * Once the code is in a working state, this documentation will be 
+ * transferred to the actual source files.
+ *
+ * The functions documented here are simple templates for the accessors
+ * that are used for all data members of LOCI objects.  Data types are
+ * divided into:
+ *
+ * @li @c scalar Things like uint8_t, uint16_t, of_mac_addr_t as well as
+ * fixed length strings
+ * @li @c octets Arbitrary length strings of binary data
+ * @li @c composite Data members which are structures
+ * @li @c list The list abstraction for data members
+ * @li @c list_element An element of a list (usually a composite object)
+ *
+ * List elements get special treatment for iterating across a list or 
+ * appending new entries to the list.  All others have 'set' and 'get'
+ * operations.  
+ *
+ * Scalar operations are "stateless" in that they simply
+ * update the underlying data or return that data.  
+ *
+ * Composites and list members update structures to point to the
+ * underlying data.  As a result, care must be taken that objects are
+ * not freed when linked composite or list members remain referring to
+ * the underlying data structure.  Note that reference counting
+ * would not solve this with the current approach, as the referring
+ * objects may be automatic variables and thus not subject to
+ * alloc/free operations.
+ */
+
+
+
+/**
+ * Generic documentation for scalar get methods
+ * @param obj The object being accessed
+ * @param value A pointer to a scalar instance of the proper type
+ * @return OF_ERROR_XXX
+ *
+ * Accesses the underlying wire buffer at the appropriate offset to
+ * retrieve and convert the value, placing the result in *value.
+ *
+ * Examples of scalar types include:
+ * @li uint8_t
+ * @li uint16_t
+ * @li uint32_t
+ * @li uint64_t
+ * @li of_mac_addr_t
+ *
+ * Example calls include:
+ * @li rv = of_port_status_reason_get(obj, &reason)
+ * @li rv = of_table_mod_config_get(obj, &config)
+ *
+ * An object instance can call directly as:
+ * @li rv = obj->reason_get(obj, &reason);    obj is an of_table_mod_t *
+ * @li rv = obj->config_get(obj, &config);    obj is an of_table_mod_t *
+ */
+extern int of_object_scalar_member_get(of_object_t *obj, uint32_t *value);
+
+/**
+ * Generic documentation for scalar set methods
+ * @param obj The object being accessed
+ * @param value Call by value parameter with the value to set
+ * @return OF_ERROR_XXX
+ *
+ * Converts value to the proper wire representation and writes it to
+ * the underlying wire buffer at the appropriate offset.
+ */
+extern int of_object_scalar_member_set(of_object_t *obj, uint32_t value);
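+
+/*
+ * Illustrative round trip for scalar accessors (a sketch only; the
+ * of_table_mod accessors are used purely as an example of the generated
+ * pattern, as in the example calls above):
+ *
+ *     of_table_mod_t *obj = of_table_mod_new(OF_VERSION_1_1);
+ *     uint32_t config;
+ *
+ *     of_table_mod_config_set(obj, 1);
+ *     of_table_mod_config_get(obj, &config);   // config is now 1
+ *     of_table_mod_delete(obj);
+ */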
+
+/**
+ * Generic documentation for an octets data get method
+ * @param obj The object being accessed
+ * @param value A pointer to an of_octets_t structure to be filled out 
+ * @return OF_ERROR_XXX
+ *
+ * NOTE:
+ * Sets the bytes member of *value to the number of bytes in the octet string.
+ */
+extern int of_object_octets_member_data_get(of_object_t *obj, of_octets_t *value);
+
+/**
+ * Generic documentation for an octets data set method
+ * @param obj The object being accessed
+ * @param value Pointer to an of_octets_t structure pointing to memory from
+ * which data will be copied to wire buffer
+ * @return OF_ERROR_XXX
+ *
+ * Copies data from the memory pointed to by value into the underlying 
+ * wire buffer, extending the buffer if necessary.  The length of obj 
+ * is updated.
+ *
+ * If the length of obj changes (because the existing octets instance is of
+ * a different length) its internal length accessor is called to update
+ * anything tracking its length.  This may call the object's parent
+ * length accessor with the update.
+ */
+extern int of_object_octets_member_data_set(of_object_t *obj, of_octets_t *value);
+
+/**
+ * Generic documentation for an octets pointer get method
+ * @param obj The object being accessed
+ * @param value Pointer to an of_octets_t structure to be initialized
+ * @return OF_ERROR_XXX
+ *
+ * Set the octets object *value to point to the underlying data buffer.  The
+ * length member of *value is set as well.
+ *
+ * The result should be treated as READ ONLY information as modifying
+ * the buffer could cause corruption of the underlying LOCI object.
+ * To change the value (especially the length) of an octets data member,
+ * allocate and set a memory buffer and use the octets_member_data_set 
+ * function.
+ */
+extern int of_object_octets_member_ptr_get(of_object_t *obj, of_octets_t *value);
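+
+/*
+ * Illustrative octets usage (a sketch only; obj stands for any LOCI object
+ * with an octets member, and of_octets_t is assumed to carry a data pointer
+ * and a byte count):
+ *
+ *     of_octets_t octets;
+ *     uint8_t pkt_data[100];
+ *
+ *     octets.data = pkt_data;
+ *     octets.bytes = sizeof(pkt_data);
+ *     of_object_octets_member_data_set(obj, &octets);  // copied to wire buffer
+ */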
+
+/**
+ * Generic documentation for a composite sub-object get method
+ * @param obj The object being accessed
+ * @param value Pointer to an object (the sub-object) of the proper type
+ * @return OF_ERROR_XXX
+ *
+ * A composite is a structure with multiple components.  On a 'get'
+ * operation, a pointer (value) to an unused instance of the appropriate 
+ * type is passed to this routine.  That instance is initialized with the
+ * proper accessor functions for the sub-object type and the wire object
+ * is set up to point to obj's wire buffer at the appropriate offset.
+ *
+ * If changes are made to the sub-object (*value) and those changes
+ * affect the length, then the corresponding composite_set must be
+ * called to commit those changes to the parent.
+ */
+extern int of_object_composite_member_get(of_object_t *obj, 
+                                          of_object_t *value);
+
+/**
+ * Generic documentation for a composite sub-object set method
+ * @param obj The object being accessed
+ * @param value Pointer to an object (the sub-object) of the proper type
+ * @return OF_ERROR_XXX
+ *
+ * A composite is a structure with multiple components.  On a 'set'
+ * operation, a pointer (value) to an instance of the appropriate type
+ * is passed to this routine.  
+ *
+ * If the parent (obj) and the child (value) point to the same underlying
+ * wire object, then the changes to the underlying buffer are already
+ * recorded.
+ *
+ * Otherwise, any existing value of the sub-object is replaced
+ * by copying the wire buffer contents from value's wire buffer to obj's
+ * wire buffer at the appropriate offset.
+ *
+ * In either case, the length of the parent will be updated if it changes.
+ */
+extern int of_object_composite_member_set(of_object_t *obj, 
+                                          of_object_t *value);
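+
+/*
+ * Illustrative composite usage with the sample accessors above (a sketch
+ * only):
+ *
+ *     of_object_t sub;   // instance of the sub-object type, bound by 'get'
+ *
+ *     of_object_composite_member_get(obj, &sub);
+ *     // ... read or update members of sub through its own accessors ...
+ *     of_object_composite_member_set(obj, &sub);  // commit any length change
+ */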
+
+/**
+ * Generic documentation for a list sub-object get method
+ * @param obj The object being accessed
+ * @param value Pointer to an object (the list sub-object)
+ * @return OF_ERROR_XXX
+ *
+ * A list is an array of instances of objects of a common (possibly
+ * polymorphic) type.  On a 'get' operation, a pointer (value) to an
+ * unused instance of the appropriate list type is passed to this
+ * routine.  That instance is initialized with the proper accessor
+ * functions for the list type and the wire object is set up to point
+ * to obj's wire buffer at the appropriate offset.
+ *
+ * Currently, the list object returned by a 'get' operation should not
+ * be altered, although changes that do not affect the length of
+ * sub-objects will work.
+ *
+ * Rather, lists should either be cleared and set from completely new
+ * instances using 'list_set', or they may be built using the append
+ * operations described below.
+ *
+ * @sa of_object_list_append_bind
+ * @sa of_object_list_append
+ */
+extern int of_object_list_member_get(of_object_t *obj, 
+                                     of_list_object_t *value);
+
+/**
+ * Generic documentation for a list entry first call
+ * @param obj The list object being accessed
+ * @param value Pointer to a generic list entry instance
+ * @return OF_ERROR_RANGE if the list is empty
+ *
+ * A list is an array of instances of objects of a common (possibly
+ * polymorphic) type.  
+ *
+ * This routine is intended for iterating through a list for
+ * reading.  Normally a 'get' operation will be done on the parent
+ * of the list to retrieve (a pointer to) the list, and then this routine
+ * will be called to set up a generic entry representing the first 
+ * element of the list.
+ *
+ * @sa of_object_list_entry_next
+ */
+extern int of_object_list_entry_first(of_list_object_t *obj, 
+                                      of_object_t *value);
+
+/**
+ * Generic documentation for a list entry next call
+ * @param obj The list object being accessed
+ * @param value Pointer to a generic list entry instance
+ * @return OF_ERROR_RANGE if the value already points to the last item 
+ * in the list
+ *
+ * A list is an array of instances of objects of a common (possibly
+ * polymorphic) type.  
+ *
+ * The corresponding list_entry_first must be called on the pair of
+ * parameters initially.  Thereafter, this routine will advance the
+ * pointers in the wire object of value to the subsequent entry in the
+ * list.
+ *
+ * Changes should not be made to list items using these routines,
+ * although 'set' operations which do not change the length of the
+ * instance will work.
+ *
+ * @sa of_object_list_entry_first
+ */
+extern int of_object_list_entry_next(of_list_object_t *obj, 
+                                     of_object_t *value);
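+
+/*
+ * Illustrative read iteration with the sample functions above (a sketch
+ * only; elt is assumed to have been initialized for the element type):
+ *
+ *     of_object_t elt;
+ *     int rv;
+ *
+ *     for (rv = of_object_list_entry_first(list, &elt);
+ *          rv == OF_ERROR_NONE;
+ *          rv = of_object_list_entry_next(list, &elt)) {
+ *         // ... inspect elt through its accessors ...
+ *     }
+ */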
+
+/**
+ * Generic documentation for a list append bind function
+ * @param obj The list object being accessed
+ * @param value Pointer to a generic list entry instance
+ * @return OF_ERROR_XXX
+ *
+ * A list is an array of instances of objects of a common (possibly
+ * polymorphic) type.
+ *
+ * This function prepares the list for the process of appending an
+ * item to its tail.  The parameter value is a pointer to a generic
+ * list entry instance.  Its wire buffer is bound to that of the list
+ * at the end of the list.  The length of the list is updated using the
+ * current length of value, which must therefore be accurate.
+ *
+ * Note that since the child has not been bound to a buffer, no
+ * values have been properly recorded for the object; they must
+ * be set after this _bind call.
+ *
+ * @sa of_object_list_entry_append
+ */
+extern int of_object_list_entry_append_bind(of_list_object_t *obj, 
+                                            of_object_t *value);
+
+/**
+ * Generic documentation for a list append function
+ * @param obj The list object being accessed
+ * @param value Pointer to a list object to be appended
+ * @return OF_ERROR_XXX
+ *
+ * A list is an array of instances of objects of a common (possibly
+ * polymorphic) type.
+ *
+ * This function takes a fully formed list entry, value, and copies
+ * that value to the end of the list.  No object ownership changes
+ * with this call.
+ *
+ * @sa of_object_list_entry_append_bind
+ */
+extern int of_object_list_entry_append(of_list_object_t *obj, 
+                                       of_object_t *value);
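+
+/*
+ * Illustrative append with the sample functions above (a sketch only;
+ * 'entry' stands for a fully formed element built elsewhere):
+ *
+ *     of_object_t elt;
+ *
+ *     of_object_list_entry_append_bind(list, &elt);
+ *     // ... set members of elt; they land directly in the list's buffer ...
+ *
+ *     // Alternatively, copy a fully formed entry onto the end of the list:
+ *     of_object_list_entry_append(list, &entry);
+ */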
+
+/* This is from loci.h */
+
+/**
+ * Inheritance super class for of_queue_prop
+ *
+ * This class is the union of of_queue_prop classes.  You can refer
+ * to it untyped by referring to the member 'header' whose structure
+ * is common across all sub-classes.
+ */
+
+union of_queue_prop_u {
+    of_queue_prop_header_t header; /* Generic instance */
+    of_queue_prop_min_rate_t min_rate;
+    of_queue_prop_max_rate_t max_rate;
+    of_queue_prop_experimenter_t experimenter;
+};
+
+/**
+ * Inheritance super class for of_action
+ *
+ * This class is the union of of_action classes.  You can refer
+ * to it untyped by referring to the member 'header' whose structure
+ * is common across all sub-classes.
+ */
+
+union of_action_u {
+    of_action_header_t header; /* Generic instance */
+    of_action_copy_ttl_out_t copy_ttl_out;
+    of_action_set_mpls_tc_t set_mpls_tc;
+    of_action_set_field_t set_field;
+    of_action_set_nw_tos_t set_nw_tos;
+    of_action_dec_mpls_ttl_t dec_mpls_ttl;
+    of_action_set_nw_dst_t set_nw_dst;
+    of_action_set_mpls_label_t set_mpls_label;
+    of_action_group_t group;
+    of_action_set_nw_src_t set_nw_src;
+    of_action_set_vlan_vid_t set_vlan_vid;
+    of_action_set_mpls_ttl_t set_mpls_ttl;
+    of_action_pop_vlan_t pop_vlan;
+    of_action_set_tp_dst_t set_tp_dst;
+    of_action_pop_mpls_t pop_mpls;
+    of_action_push_vlan_t push_vlan;
+    of_action_set_vlan_pcp_t set_vlan_pcp;
+    of_action_enqueue_t enqueue;
+    of_action_set_tp_src_t set_tp_src;
+    of_action_experimenter_t experimenter;
+    of_action_set_nw_ttl_t set_nw_ttl;
+    of_action_copy_ttl_in_t copy_ttl_in;
+    of_action_set_nw_ecn_t set_nw_ecn;
+    of_action_strip_vlan_t strip_vlan;
+    of_action_set_dl_dst_t set_dl_dst;
+    of_action_push_mpls_t push_mpls;
+    of_action_dec_nw_ttl_t dec_nw_ttl;
+    of_action_set_dl_src_t set_dl_src;
+    of_action_set_queue_t set_queue;
+    of_action_output_t output;
+};
+
+/**
+ * Inheritance super class for of_instruction
+ *
+ * This class is the union of of_instruction classes.  You can refer
+ * to it untyped by referring to the member 'header' whose structure
+ * is common across all sub-classes.
+ */
+
+union of_instruction_u {
+    of_instruction_header_t header; /* Generic instance */
+    of_instruction_clear_actions_t clear_actions;
+    of_instruction_write_actions_t write_actions;
+    of_instruction_goto_table_t goto_table;
+    of_instruction_apply_actions_t apply_actions;
+    of_instruction_experimenter_t experimenter;
+    of_instruction_write_metadata_t write_metadata;
+};
+
+/**
+ * Inheritance super class for of_oxm
+ *
+ * This class is the union of of_oxm classes.  You can refer
+ * to it untyped by referring to the member 'header' whose structure
+ * is common across all sub-classes.
+ */
+
+union of_oxm_u {
+    of_oxm_header_t header; /* Generic instance */
+    of_oxm_ipv6_flabel_t ipv6_flabel;
+    of_oxm_ipv6_dst_masked_t ipv6_dst_masked;
+    of_oxm_vlan_pcp_t vlan_pcp;
+    of_oxm_ipv4_src_t ipv4_src;
+    of_oxm_ipv6_dst_t ipv6_dst;
+    of_oxm_arp_tpa_t arp_tpa;
+    of_oxm_icmpv6_type_t icmpv6_type;
+    of_oxm_arp_sha_t arp_sha;
+    of_oxm_ipv6_src_t ipv6_src;
+    of_oxm_sctp_src_t sctp_src;
+    of_oxm_icmpv6_code_t icmpv6_code;
+    of_oxm_metadata_masked_t metadata_masked;
+    of_oxm_eth_src_masked_t eth_src_masked;
+    of_oxm_eth_dst_t eth_dst;
+    of_oxm_ipv6_nd_sll_t ipv6_nd_sll;
+    of_oxm_mpls_tc_t mpls_tc;
+    of_oxm_arp_op_t arp_op;
+    of_oxm_vlan_vid_masked_t vlan_vid_masked;
+    of_oxm_eth_type_t eth_type;
+    of_oxm_in_phy_port_t in_phy_port;
+    of_oxm_vlan_vid_t vlan_vid;
+    of_oxm_arp_tha_t arp_tha;
+    of_oxm_arp_tpa_masked_t arp_tpa_masked;
+    of_oxm_in_port_t in_port;
+    of_oxm_ip_dscp_t ip_dscp;
+    of_oxm_sctp_dst_t sctp_dst;
+    of_oxm_icmpv4_code_t icmpv4_code;
+    of_oxm_eth_dst_masked_t eth_dst_masked;
+    of_oxm_tcp_src_t tcp_src;
+    of_oxm_ip_ecn_t ip_ecn;
+    of_oxm_ipv6_src_masked_t ipv6_src_masked;
+    of_oxm_ipv4_src_masked_t ipv4_src_masked;
+    of_oxm_udp_dst_t udp_dst;
+    of_oxm_arp_spa_t arp_spa;
+    of_oxm_ipv6_nd_target_t ipv6_nd_target;
+    of_oxm_ipv4_dst_t ipv4_dst;
+    of_oxm_ipv4_dst_masked_t ipv4_dst_masked;
+    of_oxm_eth_src_t eth_src;
+    of_oxm_udp_src_t udp_src;
+    of_oxm_ipv6_nd_tll_t ipv6_nd_tll;
+    of_oxm_icmpv4_type_t icmpv4_type;
+    of_oxm_mpls_label_t mpls_label;
+    of_oxm_tcp_dst_t tcp_dst;
+    of_oxm_ip_proto_t ip_proto;
+    of_oxm_metadata_t metadata;
+    of_oxm_arp_spa_masked_t arp_spa_masked;
+};
+
diff --git a/c_gen/templates/of_message.h b/c_gen/templates/of_message.h
new file mode 100644
index 0000000..77066bc
--- /dev/null
+++ b/c_gen/templates/of_message.h
@@ -0,0 +1,252 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/*
+ * These routines manipulate a low level buffer assuming it holds
+ * an OpenFlow message. 
+ */
+
+#if !defined(_OF_MESSAGE_H_)
+#define _OF_MESSAGE_H_
+
+#include <loci/of_buffer.h>
+
+typedef uint8_t *of_message_t;
+
+/* A few key common header offsets */
+#define OF_MESSAGE_VERSION_OFFSET 0
+#define OF_MESSAGE_TYPE_OFFSET 1
+#define OF_MESSAGE_LENGTH_OFFSET 2
+#define OF_MESSAGE_XID_OFFSET 4
+#define OF_MESSAGE_HEADER_LENGTH 8
+#define OF_MESSAGE_STATS_TYPE_OFFSET 8
+#define OF_MESSAGE_FLOW_MOD_COMMAND_OFFSET(version) ((version) == 1 ? 56 : 25)
+
+#define OF_MESSAGE_MIN_LENGTH 8
+#define OF_MESSAGE_MIN_STATS_LENGTH (OF_MESSAGE_STATS_TYPE_OFFSET + 2)
+#define OF_MESSAGE_MIN_FLOW_MOD_LENGTH(version)  ((version) == 1 ? 57 : 26)
+
+#define OF_MESSAGE_EXPERIMENTER_ID_OFFSET 8
+#define OF_MESSAGE_EXPERIMENTER_SUBTYPE_OFFSET 12
+#define OF_MESSAGE_EXPERIMENTER_MIN_LENGTH 16
+
+/**
+ * The "default" free message function; NULL means use nominal malloc/free
+ */
+#define OF_MESSAGE_FREE_FUNCTION NULL
+
+/**
+ * Map a message to the uint8_t * start of the message
+ */
+#define OF_MESSAGE_TO_BUFFER(msg) ((uint8_t *)(msg))
+
+/**
+ * Map a uint8_t * to a message object
+ */
+#define OF_BUFFER_TO_MESSAGE(buf) ((of_message_t)(buf))
+
+/****************************************************************
+ *
+ * Message field accessors.
+ *
+ * These do no range checking and assume a buffer with sufficient
+ * length to access the data.  These are low level accessors used
+ * during the parsing and coercion stage of message processing.
+ *
+ * Fields include:  version, message type, message length,
+ * transaction id, stats type (now multi-part type), experimenter id,
+ * experimenter type
+ *
+ ****************************************************************/
+
+/**
+ * @brief Get/set version of a message
+ * @param msg Pointer to the message buffer of sufficient length
+ * @param version Data for set operation
+ * @returns get returns version
+ */
+
+static inline of_version_t
+of_message_version_get(of_message_t msg) {
+    return (of_version_t)msg[OF_MESSAGE_VERSION_OFFSET];
+}
+
+static inline void
+of_message_version_set(of_message_t msg, of_version_t version) {
+    buf_u8_set(msg + OF_MESSAGE_VERSION_OFFSET, (uint8_t)version);
+}
+
+/**
+ * @brief Get/set OpenFlow type of a message
+ * @param msg Pointer to the message buffer of sufficient length
+ * @param value Data for set operation
+ * @returns get returns message type
+ */
+
+static inline uint8_t
+of_message_type_get(of_message_t msg) {
+    return msg[OF_MESSAGE_TYPE_OFFSET];
+}
+
+static inline void
+of_message_type_set(of_message_t msg, uint8_t value) {
+    buf_u8_set(msg + OF_MESSAGE_TYPE_OFFSET, value);
+}
+
+/**
+ * @brief Get/set in-buffer length of a message
+ * @param msg Pointer to the message buffer of sufficient length
+ * @param len Data for set operation
+ * @returns get returns length in host order
+ */
+
+static inline uint16_t
+of_message_length_get(of_message_t msg) {
+    uint16_t val;
+    buf_u16_get(msg + OF_MESSAGE_LENGTH_OFFSET, &val);
+    return val;
+}
+
+static inline void
+of_message_length_set(of_message_t msg, uint16_t len) {
+    buf_u16_set(msg + OF_MESSAGE_LENGTH_OFFSET, len);
+}
+
+
+/**
+ * @brief Get/set transaction ID of a message
+ * @param msg Pointer to the message buffer of sufficient length
+ * @param xid Data for set operation
+ * @returns get returns xid in host order
+ */
+
+static inline uint32_t
+of_message_xid_get(of_message_t msg) {
+    uint32_t val;
+    buf_u32_get(msg + OF_MESSAGE_XID_OFFSET, &val);
+    return val;
+}
+
+static inline void
+of_message_xid_set(of_message_t msg, uint32_t xid) {
+    buf_u32_set(msg + OF_MESSAGE_XID_OFFSET, xid);
+}
+
+/**
+ * @brief Get/set stats type of a message
+ * @param msg Pointer to the message buffer of sufficient length
+ * @param type Data for set operation
+ * @returns get returns stats type in host order
+ */
+
+static inline uint16_t
+of_message_stats_type_get(of_message_t msg) {
+    uint16_t val;
+    buf_u16_get(msg + OF_MESSAGE_STATS_TYPE_OFFSET, &val);
+    return val;
+}
+
+static inline void
+of_message_stats_type_set(of_message_t msg, uint16_t type) {
+    buf_u16_set(msg + OF_MESSAGE_STATS_TYPE_OFFSET, type);
+}
+
+
+/**
+ * @brief Get/set experimenter ID of a message
+ * @param msg Pointer to the message buffer of sufficient length
+ * @param experimenter_id Data for set operation
+ * @returns get returns experimenter id in host order
+ */
+
+static inline uint32_t
+of_message_experimenter_id_get(of_message_t msg) {
+    uint32_t val;
+    buf_u32_get(msg + OF_MESSAGE_EXPERIMENTER_ID_OFFSET, &val);
+    return val;
+}
+
+static inline void
+of_message_experimenter_id_set(of_message_t msg, uint32_t experimenter_id) {
+    buf_u32_set(msg + OF_MESSAGE_EXPERIMENTER_ID_OFFSET, experimenter_id);
+}
+
+
+/**
+ * @brief Get/set experimenter message type (subtype) of a message
+ * @param msg Pointer to the message buffer of sufficient length
+ * @param subtype Data for set operation
+ * @returns get returns experimenter message type in host order
+ */
+
+static inline uint32_t
+of_message_experimenter_subtype_get(of_message_t msg) {
+    uint32_t val;
+    buf_u32_get(msg + OF_MESSAGE_EXPERIMENTER_SUBTYPE_OFFSET, &val);
+    return val;
+}
+
+static inline void
+of_message_experimenter_subtype_set(of_message_t msg,
+                                    uint32_t subtype) {
+    buf_u32_set(msg + OF_MESSAGE_EXPERIMENTER_SUBTYPE_OFFSET,
+                subtype);
+}
+
+/**
+ * The flow mod command field changed from 16 bits to 8 bits on the
+ * wire between OpenFlow 1.0 and 1.1.
+ */
+static inline uint8_t
+of_message_flow_mod_command_get(of_message_t msg, of_version_t version) {
+    uint16_t val16;
+    uint8_t val8;
+
+    if (version == OF_VERSION_1_0) {
+        buf_u16_get(msg + OF_MESSAGE_FLOW_MOD_COMMAND_OFFSET(version), &val16);
+        val8 = val16;
+    } else {
+        buf_u8_get(msg + OF_MESSAGE_FLOW_MOD_COMMAND_OFFSET(version), &val8);
+    }
+    return val8;
+}
+
+static inline void
+of_message_flow_mod_command_set(of_message_t msg, of_version_t version, 
+                                uint8_t command) {
+    uint16_t val16;
+
+    if (version == OF_VERSION_1_0) {
+        val16 = command;
+        buf_u16_set(msg + OF_MESSAGE_FLOW_MOD_COMMAND_OFFSET(version), val16);
+    } else {
+        buf_u8_set(msg + OF_MESSAGE_FLOW_MOD_COMMAND_OFFSET(version), command);
+    }
+}
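+
+/*
+ * Illustrative header parse (a sketch only; 'buf' stands for a received
+ * buffer holding at least OF_MESSAGE_MIN_LENGTH bytes of a message):
+ *
+ *     of_message_t msg = OF_BUFFER_TO_MESSAGE(buf);
+ *
+ *     of_version_t version = of_message_version_get(msg);
+ *     uint8_t type = of_message_type_get(msg);
+ *     uint16_t length = of_message_length_get(msg);
+ *     uint32_t xid = of_message_xid_get(msg);
+ */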
+
+#endif /* _OF_MESSAGE_H_ */
diff --git a/c_gen/templates/of_object.c b/c_gen/templates/of_object.c
new file mode 100644
index 0000000..a5cfecd
--- /dev/null
+++ b/c_gen/templates/of_object.c
@@ -0,0 +1,728 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/****************************************************************
+ *
+ * of_object.c
+ *
+ * These are the low level object constructor/destructor operators.
+ *
+ ****************************************************************/
+
+#include "loci_log.h"
+#include <loci/loci.h>
+#include <loci/loci_validator.h>
+
+#if defined(OF_OBJECT_TRACKING)
+#include <BigList/biglist.h>
+
+loci_object_track_t loci_global_tracking;
+
+#define TRACK (&loci_global_tracking)
+#define TRACK_OBJS (TRACK->objects)
+#define CHECK_MAX(val, max) if ((val) > (max)) (max) = (val)
+
+#endif
+
+/**
+ * Create a generic new object and possibly underlying wire buffer
+ * @param bytes The number of bytes to allocate in the underlying buffer
+ *
+ * If bytes <= 0, do not allocate a wire buffer.
+ *
+ * Note that this is an internal function.  The class specific
+ * new functions should be called to properly initialize and track an
+ * OF object.
+ */
+
+of_object_t *
+of_object_new(int bytes)
+{
+    of_object_t *obj;
+
+    if ((obj = (of_object_t *)MALLOC(sizeof(of_generic_t))) == NULL) {
+        return NULL;
+    }
+    MEMSET(obj, 0, sizeof(of_generic_t));
+
+    if (bytes > 0) {
+        if ((obj->wire_object.wbuf = of_wire_buffer_new(bytes)) == NULL) {
+            FREE(obj);
+            return NULL;
+        }
+        obj->wire_object.owned = 1;
+    }
+
+    return obj;
+}
+
+/**
+ * The delete function for LOCI objects
+ *
+ * @param obj Pointer to the object to be deleted
+ *
+ * This can be called on any LOCI object; it should not need to be
+ * overridden.
+ */
+
+void
+of_object_delete(of_object_t *obj)
+{
+    if (obj == NULL) {
+        return;
+    }
+
+#if defined(OF_OBJECT_TRACKING)
+    ASSERT(obj->track_info.magic == OF_OBJECT_TRACKING_MAGIC &&
+           "of_object double free?");
+    LOCI_LOG_TRACE("OF obj delete %p.  Wire buf %p.\n", obj,
+                   obj->wire_object.wbuf);
+    ASSERT(TRACK->count_current > 0);
+    TRACK->count_current -= 1;
+    TRACK->deletes += 1;
+
+    TRACK_OBJS = biglist_remove_link_free(TRACK_OBJS,
+                                          obj->track_info.bl_entry);
+    obj->track_info.magic = 0;
+#endif
+
+    /*
+     * Make callback if present
+     */
+    if (obj->track_info.delete_cb != NULL) {
+        obj->track_info.delete_cb(obj);
+    }
+
+    if (obj->wire_object.owned) {
+        of_wire_buffer_free(obj->wire_object.wbuf);
+    }
+
+    FREE(obj);
+}
+
+/**
+ * Duplicate an object
+ * @param src The object to be duplicated
+ * @returns Pointer to the duplicate or NULL on error.  Caller is responsible
+ * for freeing the returned object.
+ */
+
+of_object_t *
+of_object_dup_(of_object_t *src)
+{
+    of_object_t *dst;
+    of_object_init_f init_fn;
+
+    if ((dst = (of_object_t *)MALLOC(sizeof(of_generic_t))) == NULL) {
+        return NULL;
+    }
+
+    MEMSET(dst, 0, sizeof(*dst));
+
+    /* Allocate a minimal wire buffer assuming we will not write to it. */
+    if ((dst->wire_object.wbuf = of_wire_buffer_new(src->length)) == NULL) {
+        FREE(dst);
+        return NULL;
+    }
+
+    dst->wire_object.owned = 1;
+
+    init_fn = of_object_init_map[src->object_id];
+    init_fn(dst, src->version, src->length, 0);
+
+    MEMCPY(OF_OBJECT_BUFFER_INDEX(dst, 0),
+           OF_OBJECT_BUFFER_INDEX(src, 0),
+           src->length);
+
+    return dst;
+}
+
+#if defined(OF_OBJECT_TRACKING)
+
+/**
+ * Record an object for tracking
+ *
+ * @param obj The object being tracked
+ * @param file The file name where the allocation is happening
+ * @param line The line number in the file where the alloc is happening
+ */
+
+void
+of_object_track(of_object_t *obj, const char *file, int line)
+{
+    if (obj != NULL) {
+        LOCI_LOG_TRACE("OF obj track %p, wire buf %p\n%s:%d\n",
+                      obj, obj->wire_object.wbuf, file, line);
+        obj->track_info.file = file;
+        obj->track_info.line = line;
+        TRACK_OBJS = biglist_prepend(TRACK_OBJS, (void *)obj);
+        obj->track_info.bl_entry = TRACK_OBJS;
+        obj->track_info.magic = OF_OBJECT_TRACKING_MAGIC;
+
+        TRACK->allocs += 1;
+        TRACK->count_current += 1;
+        CHECK_MAX(TRACK->count_current, TRACK->count_max);
+    }
+}
+
+/**
+ * The dup function when tracking is enabled
+ */
+
+of_object_t *
+of_object_dup_tracking(of_object_t *src, const char *file, int line)
+{
+    of_object_t *obj;
+
+    obj = of_object_dup_(src);
+    of_object_track(obj, file, line);
+
+    return obj;
+}
+
+/**
+ * Display track info for one object
+ */
+
+void
+of_object_track_output(of_object_t *obj, loci_writer_f writer, void* cookie)
+{
+    const char *offset;
+    static const char *unknown = "Unknown file";
+
+    if (obj->track_info.file) {
+        offset = strstr(obj->track_info.file, "Modules/");
+        if (offset == NULL) {
+            offset = obj->track_info.file;
+        } else {
+            offset += 8; /* Jump over Modules/ too */
+        }
+    } else {
+        offset = unknown;
+    }
+    writer(cookie, "obj %p. type %s.\n%s:%d\n",
+               obj, of_object_id_str[obj->object_id],
+               offset, obj->track_info.line);
+}
+
+/**
+ * Dump out the current object list from LOCI
+ *
+ * @param writer The output writer function
+ * @param cookie Opaque cookie passed through to the writer
+ *
+ */
+
+void
+of_object_track_report(loci_writer_f writer, void* cookie)
+{
+    biglist_t *elt;
+    of_object_t *obj;
+    int count = 0;
+
+    writer(cookie, "\nLOCI Outstanding object list.\n");
+    writer(cookie, "Objs: Current %d. Max %d. Created %d. Deleted %d\n",
+               TRACK->count_current, TRACK->count_max, TRACK->allocs,
+               TRACK->deletes);
+    if (TRACK_OBJS) {
+        BIGLIST_FOREACH_DATA(elt, TRACK_OBJS, of_object_t *, obj) {
+            of_object_track_output(obj, writer, cookie);
+            ++count;
+        }
+    }
+    if (count != TRACK->count_current) {
+        writer(cookie, "\nERROR:  List has %d, but track count is %d\n",
+                   count, TRACK->count_current);
+    }
+    writer(cookie, "\nEnd of outstanding object list\n");
+}
+
+#endif
+
+/**
+ * Generic new from message call
+ */
+
+of_object_t *
+of_object_new_from_message(of_message_t msg, int len)
+{
+    of_object_id_t object_id;
+    of_object_t *obj;
+    of_version_t version;
+
+    version = of_message_version_get(msg);
+    if (!OF_VERSION_OKAY(version)) {
+        return NULL;
+    }
+
+    if (of_validate_message(msg, len) != 0) {
+        LOCI_LOG_ERROR("message validation failed\n");
+        return NULL;
+    }
+
+    object_id = of_message_to_object_id(msg, len);
+    ASSERT(object_id != OF_OBJECT_INVALID);
+
+    if ((obj = of_object_new(-1)) == NULL) {
+        return NULL;
+    }
+
+    of_object_init_map[object_id](obj, version, 0, 0);
+
+    if (of_object_buffer_bind(obj, OF_MESSAGE_TO_BUFFER(msg), len, 
+                              OF_MESSAGE_FREE_FUNCTION) < 0) {
+        FREE(obj);
+        return NULL;
+    }
+    obj->length = len;
+    obj->version = version;
+
+#if defined(OF_OBJECT_TRACKING)
+    /* @FIXME Would be nice to get caller; for now only in cxn_instance */
+    of_object_track(obj, __FILE__, __LINE__);
+#endif
+
+    return obj;
+}
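+
+/*
+ * Illustrative parse-and-dispose flow (a sketch only; 'data' and 'len'
+ * stand for a received buffer and its length):
+ *
+ *     of_object_t *obj;
+ *
+ *     obj = of_object_new_from_message(OF_BUFFER_TO_MESSAGE(data), len);
+ *     if (obj != NULL) {
+ *         // ... use obj through its class specific accessors ...
+ *         of_object_delete(obj);  // releases the object and the bound buffer
+ *     }
+ */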
+
+/**
+ * Bind an existing buffer to an LOCI object
+ *
+ * @param obj Pointer to the object to be updated
+ * @param buf Pointer to the buffer to bind to obj
+ * @param bytes Length of buf
+ * @param buf_free An optional free function to be applied to
+ * buf on deallocation
+ *
+ * This can be called on any LOCI object; it should not need to be
+ * overridden.
+ */
+
+int
+of_object_buffer_bind(of_object_t *obj, uint8_t *buf, int bytes, 
+                      of_buffer_free_f buf_free)
+{
+    of_wire_object_t *wobj;
+    of_wire_buffer_t *wbuf;
+
+    ASSERT(buf != NULL);
+    ASSERT(bytes > 0);
+    // ASSERT(wobj is not bound);
+
+    wobj = &obj->wire_object;
+    MEMSET(wobj, 0, sizeof(*wobj));
+
+    wbuf = of_wire_buffer_new_bind(buf, bytes, buf_free);
+    if (wbuf == NULL) {
+        return OF_ERROR_RESOURCE;
+    }
+
+    wobj->wbuf = wbuf;
+    wobj->owned = 1;
+    obj->length = bytes;
+
+    return OF_ERROR_NONE;
+}
+
+/**
+ * Connect a child to a parent at the wire buffer level
+ *
+ * @param parent The top level object to bind to
+ * @param child The sub-object connecting to the parent
+ * @param offset The offset at which to attach the child RELATIVE 
+ * TO THE PARENT in the buffer
+ * @param bytes The amount of the buffer dedicated to the child; see below
+ * 
+ * This is used for 'get' accessors for composite types as well as
+ * iterator functions for lists, both read (first/next) and write
+ * (append_init, append_advance).
+ *
+ * Connect a child object to a parent by setting up the child's
+ * wire_object to point to the parent's underlying buffer.  The value
+ * of the parameter bytes is important in determining how the child
+ * is initialized:
+ * @li If bytes <= 0, the length and type of the child are not modified;
+ * no additional space is added to the buffer.
+ * @li If bytes > 0, the current wire buffer is grown to 
+ * accommodate this many bytes.  This is to support append operations.
+ *
+ * If an error is returned, future references to the child object
+ * (until it is reinitialized) are undefined.
+ */
+static void
+object_child_attach(of_object_t *parent, of_object_t *child, 
+                       int offset, int bytes)
+{
+    of_wire_object_t *c_wobj; /* Pointer to child's wire object */
+    of_wire_buffer_t *wbuf; /* Pointer to common wire buffer manager */
+
+    child->parent = parent;
+    wbuf = parent->wire_object.wbuf;
+
+    /* Set up the child's wire buf to point to same as parent */
+    c_wobj = &child->wire_object;
+    c_wobj->wbuf = wbuf;
+    c_wobj->obj_offset = parent->wire_object.obj_offset + offset;
+    c_wobj->owned = 0;
+
+    /*
+     * bytes determines if this is a read or write setup.
+     * If > 0, grow the buffer to accommodate the space
+     * Otherwise do nothing
+     */
+    if (bytes > 0) { /* Set internal length, request buffer space */
+        int tot_bytes; /* Total bytes to request for buffer if updated */
+
+        /* Set up space for the child in the parent's buffer */
+        tot_bytes = parent->wire_object.obj_offset + offset + bytes;
+
+        of_wire_buffer_grow(wbuf, tot_bytes);
+        child->length = bytes;
+    }
+    /* if bytes == 0 don't do anything */
+}
+
+/**
+ * Check for room in an object's wire buffer.
+ * @param obj The object being checked
+ * @param new_len The desired length
+ * @return Boolean
+ */
+
+int
+of_object_can_grow(of_object_t *obj, int new_len)
+{
+    return OF_OBJECT_ABSOLUTE_OFFSET(obj, new_len) <=
+        WBUF_ALLOC_BYTES(obj->wire_object.wbuf);
+}
+
+/**
+ * Set the xid of a message object
+ * @param obj The object being accessed
+ * @param xid The xid value to store in the wire buffer
+ * @return OF_ERROR_
+ * Since the XID is common across all versions, this is used
+ * for all XID accessors.
+ */
+
+int
+of_object_xid_set(of_object_t *obj, uint32_t xid)
+{
+    of_wire_buffer_t *wbuf;
+
+    if ((wbuf = OF_OBJECT_TO_WBUF(obj)) == NULL) {
+        return OF_ERROR_PARAM;
+    }
+    of_wire_buffer_u32_set(wbuf, 
+        OF_OBJECT_ABSOLUTE_OFFSET(obj, OF_MESSAGE_XID_OFFSET), xid);
+    return OF_ERROR_NONE;
+}
+
+/**
+ * Get the xid of a message object
+ * @param obj The object being accessed
+ * @param xid Pointer to where to store the xid value
+ * @return OF_ERROR_
+ * Since the XID is common across all versions, this is used
+ * for all XID accessors.
+ */
+
+int
+of_object_xid_get(of_object_t *obj, uint32_t *xid)
+{
+    of_wire_buffer_t *wbuf;
+
+    if ((wbuf = OF_OBJECT_TO_WBUF(obj)) == NULL) {
+        return OF_ERROR_PARAM;
+    }
+    of_wire_buffer_u32_get(wbuf, 
+        OF_OBJECT_ABSOLUTE_OFFSET(obj, OF_MESSAGE_XID_OFFSET), xid);
+    return OF_ERROR_NONE;
+}
+
+/****************************************************************
+ *
+ * Generic list operation implementations
+ *
+ ****************************************************************/
+
+/**
+ * Set up a child for appending to a parent list
+ * @param parent The parent; must be a list object
+ * @param child The child object; must be of type list element
+ * @return OF_ERROR_
+ *
+ * Attaches the wire buffer of the parent to the child by pointing
+ * the child to the end of the parent.
+ * 
+ * Set the wire length and type from the child.
+ * Update the parent length by adding the current child length.
+ *
+ * After calling this function, the child object may be updated
+ * resulting in changes to the parent's wire buffer
+ * 
+ */ 
+
+int
+of_list_append_bind(of_object_t *parent, of_object_t *child)
+{
+    if (parent == NULL || child == NULL ||
+           parent->wire_object.wbuf == NULL) {
+        return OF_ERROR_PARAM;
+    }
+
+    if (!of_object_can_grow(parent, parent->length + child->length)) {
+        return OF_ERROR_RESOURCE;
+    }
+
+    object_child_attach(parent, child, parent->length, 
+                        child->length);
+
+    /* Update the wire length and type if needed */
+    if (child->wire_length_set) {
+        child->wire_length_set(child, child->length);
+    }
+
+    if (child->wire_type_set) {
+        child->wire_type_set(child, child->object_id);
+    }
+
+    /* Update the parent's length */
+    of_object_parent_length_update(parent, child->length);
+
+    OF_LENGTH_CHECK_ASSERT(parent);
+
+    return OF_ERROR_NONE;
+}
+
+/**
+ * Generic atomic list append operation
+ * @param list The list to which an item is being appended
+ * @param item The item to append to the list
+ *
+ * The contents of the item are copied to the end of the list.
+ * Currently assumes the list is at the end of its parent.
+ */
+int
+of_list_append(of_object_t *list, of_object_t *item)
+{
+    int new_len;
+
+    new_len = list->length + item->length;
+
+    if (!of_object_can_grow(list, new_len)) {
+        return OF_ERROR_RESOURCE;
+    }
+
+    of_wire_buffer_grow(list->wire_object.wbuf,
+                        OF_OBJECT_ABSOLUTE_OFFSET(list, new_len));
+
+    MEMCPY(OF_OBJECT_BUFFER_INDEX(list, list->length),
+           OF_OBJECT_BUFFER_INDEX(item, 0), item->length);
+
+    /* Update the list's length */
+    of_object_parent_length_update(list, item->length);
+
+    OF_LENGTH_CHECK_ASSERT(list);
+
+    return OF_ERROR_NONE;
+}
+
+/**
+ * Generic list first function
+ * @param parent The parent; must be a list object
+ * @param child The child object; must be of type list element
+ * @return OF_ERROR_RANGE if list is empty
+ * @return OF_ERROR_
+ *
+ * Sets up the child to point to the first element in the list
+ *
+ * Child init must be called before this is called.
+ *
+ * @note TREAT AS PRIVATE
+ * Does not fully initialize the object.
+ */
+int
+of_list_first(of_object_t *parent, of_object_t *child)
+{
+    if (parent->length == 0) { /* Empty list */
+        return OF_ERROR_RANGE;
+    }
+
+    object_child_attach(parent, child, 0, 0);
+
+    return OF_ERROR_NONE;
+}
+
+/**
+ * Return boolean indicating if child is pointing to last entry in parent
+ * @param parent The parent; must be a list object
+ * @param child The child object; must be of type list element
+ * @return 1 if the child is the last entry in the list, 0 otherwise
+ *
+ */
+static int
+of_list_is_last(of_object_t *parent, of_object_t *child)
+{
+    if (child->wire_object.obj_offset + child->length >= 
+        parent->wire_object.obj_offset + parent->length) {
+        return 1;
+    }
+
+    return 0;
+}
+
+/**
+ * Generic list next function
+ * @param parent The parent; must be a list object
+ * @param child The child object; must be of type list element
+ * @return OF_ERROR_RANGE if at end of list
+ * @return OF_ERROR_
+ *
+ * Advances the child to point to the subsequent element in the list.
+ * The wire buffer object must not have been modified since the 
+ * previous call to _first or _next.
+ *
+ * @note TREAT AS PRIVATE
+ * Does not fully initialize the object.
+ */ 
+int
+of_list_next(of_object_t *parent, of_object_t *child)
+{
+    int offset;
+
+    ASSERT(child->length > 0);
+
+    /* Get offset of parent */
+    if (of_list_is_last(parent, child)) {
+        return OF_ERROR_RANGE; /* We were on the last object */
+    }
+
+    /* Offset is relative to parent start */
+    offset = (child->wire_object.obj_offset - parent->wire_object.obj_offset) +
+        child->length;
+    object_child_attach(parent, child, offset, 0);
+
+    return OF_ERROR_NONE;
+}
+
+void
+of_object_wire_buffer_steal(of_object_t *obj, uint8_t **buffer)
+{
+    ASSERT(obj != NULL);
+    of_wire_buffer_steal(obj->wire_object.wbuf, buffer);
+    obj->wire_object.wbuf = NULL;
+}
+
+/*
+ * Set member:
+ *    get_wbuf_extent
+ *    find offset of start of member
+ *    if offset is at wbuf_extent (append new data)
+ *        copy data at extent
+ *        update parent length
+ *    else
+ *        find length of current entry
+ *        move from end of current to extent to create (or remove) space
+ *        copy data to offset
+ *        update my length -- NEED LOCAL INFO TO DO THIS for some cases
+ */
+
+/* Also need: get offset of member for all combinations */
+/* Also need: get length of member for all combinations */
+#if 0
+/**
+ * Append the wire buffer data from src to the end of dst's wire buffer
+ */
+int
+of_object_append_buffer(of_object_t *dst, of_object_t *src)
+{
+    of_wire_buffer_t *s_wbuf, *d_wbuf;
+    int orig_len, dst_offset, src_offset;
+
+    d_wbuf = OF_OBJECT_TO_WBUF(dst);
+    s_wbuf = OF_OBJECT_TO_WBUF(src);
+    dst_offset = dst->wire_object.obj_offset + dst->length;
+    src_offset = src->wire_object.obj_offset;
+    OF_WIRE_BUFFER_INIT_CHECK(d_wbuf, dst_offset + src->length);
+    MEMCPY(OF_WBUF_BUFFER_POINTER(d_wbuf, dst_offset),
+           OF_WBUF_BUFFER_POINTER(s_wbuf, 0), src->length);
+
+    orig_len = dst->length;
+    dst->length += src->length;
+
+    return OF_ERROR_NONE;
+}
+
+/**
+ * Set the length of the actions object in a packet_in object
+ */
+
+int
+of_packet_out_actions_length_set(of_packet_t *obj, int len)
+{
+    if (obj == NULL || obj->object_id != OF_PACKET_IN ||
+        obj->wire_object.wbuf == NULL) {
+        return OF_ERROR_PARAM;
+    }
+
+    obj->actions_len_set(obj, len);
+}
+
+int
+_packet_out_data_offset_get(of_packet_t *obj)
+{
+    if (obj == NULL || obj->object_id != OF_PACKET_IN ||
+        obj->wire_object.wbuf == NULL) {
+        return -1;
+    }
+
+    return OF_PACKET_OUT_FIXED_LENGTH + _packet_out_actions_length_get(obj);
+}
+
+
+/**
+ * Simple length derivation function
+ *
+ * Most variable length fields are alone at the end of a structure.
+ * Their length is a simple calculation, just the total length of
+ * the parent minus the length of the non-variable part of the 
+ * parent's class type.
+ *
+ * @param parent The parent object
+ * @param length (out) Where to store the length of the final 
+ * variable length member
+ */
+int
+of_object_simple_length_derive(of_object_t *obj, int *length)
+{
+    
+}
+#endif
diff --git a/c_gen/templates/of_object.h b/c_gen/templates/of_object.h
new file mode 100644
index 0000000..cb9342e
--- /dev/null
+++ b/c_gen/templates/of_object.h
@@ -0,0 +1,176 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/*
+ * @fixme THIS FILE NEEDS CLEANUP.  It may just go away.
+ * 
+ * Low level internal header file.  Defines inheritance mechanism for
+ * LOCI objects.  In general, the routines in this file should not be
+ * called directly.  Rather the class-specific operations should be 
+ * used from loci.h.
+ *
+ * TREAT THESE FUNCTIONS AS PRIVATE.  THEY ARE GENERALLY HELPER
+ * FUNCTIONS FOR LOCI TYPE SPECIFIC IMPLEMENTATIONS
+ */
+
+#if !defined(_OF_OBJECT_H_)
+#define _OF_OBJECT_H_
+
+#include <loci/of_buffer.h>
+#include <loci/of_match.h>
+#include <loci/loci_base.h>
+#include <loci/of_message.h>
+
+#if defined(OF_OBJECT_TRACKING)
+#include <BigList/biglist.h>
+#endif
+
+/**
+ * This is the number of bytes reserved for metadata in each
+ * of_object_t instance.
+ */
+#define OF_OBJECT_METADATA_BYTES 32
+
+/****************************************************************
+ * General list operations: first, next, append_setup, append_advance
+ ****************************************************************/
+
+/* General list first operation */
+extern int of_list_first(of_object_t *parent, of_object_t *child);
+
+/* General list next operation */
+extern int of_list_next(of_object_t *parent, of_object_t *child);
+
+/* General list append bind operation */
+extern int of_list_append_bind(of_object_t *parent, of_object_t *child);
+
+/* Append a copy of item to list */
+extern int of_list_append(of_object_t *list, of_object_t *item);
+
+extern of_object_t *of_object_new(int bytes);
+extern of_object_t * of_object_dup_(of_object_t *src);
+
+/**
+ * Callback function prototype for deleting an object
+ */
+typedef void (*of_object_delete_callback_f)(of_object_t *obj);
+
+#if defined(OF_OBJECT_TRACKING)
+/**
+ * When tracking is enabled, the location of each new or dup
+ * call of an OF object is recorded and a list is kept of all
+ * outstanding objects.
+ *
+ * This dovetails with using objects to track outstanding operations
+ * for barrier processing.
+ */
+
+/**
+ * Global tracking stats
+ */
+typedef struct loci_object_track_s {
+    biglist_t *objects;
+    int count_current;
+    uint32_t count_max;
+    uint32_t allocs;
+    uint32_t deletes;
+} loci_object_track_t;
+
+extern loci_object_track_t loci_global_tracking;
+
+/* Remap dup call to tracking */
+extern of_object_t * of_object_dup_tracking(of_object_t *src,
+                                            const char *file, int line);
+#define of_object_dup(src) of_object_dup_tracking(src, __FILE__, __LINE__)
+extern void of_object_track(of_object_t *obj, const char *file, int line);
+
+extern void of_object_track_output(of_object_t *obj, loci_writer_f writer, void* cookie); 
+extern void of_object_track_report(loci_writer_f writer, void* cookie); 
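+
+/*
+ * Illustrative report call when OF_OBJECT_TRACKING is compiled in (a
+ * sketch only; the writer below is a hypothetical adapter whose signature
+ * is assumed to match loci_writer_f):
+ *
+ *     static int
+ *     stderr_writer(void *cookie, const char *fmt, ...);
+ *
+ *     of_object_track_report(stderr_writer, NULL);
+ */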
+
+/**
+ * The data stored in each object related to tracking and deletion.
+ * The LOCI client may install a delete callback function to be
+ * notified when an object is destroyed.
+ */
+
+typedef struct of_object_track_info_s {
+    of_object_delete_callback_f delete_cb;  /* To be implemented */
+    void *delete_cookie;
+
+    /* Track file and line where allocated */
+    const char *file;
+    int line;
+    biglist_t *bl_entry; /* Pointer to self */
+    uint32_t magic; /* validation value */
+} of_object_track_info_t;
+
+#define OF_OBJECT_TRACKING_MAGIC 0x11235813
+#else
+
+/* Use native dup call */
+#define of_object_dup of_object_dup_
+
+/**
+ * When tracking is not enabled, we still support a delete callback
+ */
+
+typedef struct of_object_track_info_s {
+    of_object_delete_callback_f delete_cb;  /* To be implemented */
+    void *delete_cookie;
+} of_object_track_info_t;
+
+#endif
+
+extern int of_object_xid_set(of_object_t *obj, uint32_t xid);
+extern int of_object_xid_get(of_object_t *obj, uint32_t *xid);
+
+/* Bind a buffer to an object, usually for parsing the buffer */
+extern int of_object_buffer_bind(of_object_t *obj, uint8_t *buf, 
+                                 int bytes, of_buffer_free_f buf_free);
+
+
+/**
+ * Steal a wire buffer from an object.
+ * @param obj The object whose buffer is being removed
+ * @param buffer [out] Where the pointer to the underlying data buffer is stored
+ *
+ * The wire buffer is taken from the object and the object's wire buffer
+ * pointer is set to NULL.  The ref_count of the wire buffer is not changed.
+ */
+extern void of_object_wire_buffer_steal(of_object_t *obj, uint8_t **buffer);
+extern int of_object_append_buffer(of_object_t *dst, of_object_t *src);
+
+extern of_object_t *of_object_new_from_message(of_message_t msg, int len);
+
+/* Delete an OpenFlow object without reference to its type */
+extern void of_object_delete(of_object_t *obj);
+
+int of_object_can_grow(of_object_t *obj, int new_len);
+
+#endif /* _OF_OBJECT_H_ */
diff --git a/c_gen/templates/of_type_maps.c b/c_gen/templates/of_type_maps.c
new file mode 100644
index 0000000..ef1df37
--- /dev/null
+++ b/c_gen/templates/of_type_maps.c
@@ -0,0 +1,799 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/****************************************************************
+ *
+ * Functions related to mapping wire values to object types
+ * and lengths
+ *
+ ****************************************************************/
+
+#include <loci/loci.h>
+#include <loci/of_message.h>
+
+/****************************************************************
+ * Top level OpenFlow message length functions
+ ****************************************************************/
+
+/**
+ * Get the length of a message object as reported on the wire
+ * @param obj The object to check
+ * @param bytes (out) Where the length is stored
+ */
+void
+of_object_message_wire_length_get(of_object_t *obj, int *bytes)
+{
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+    ASSERT(wbuf != NULL);
+    // ASSERT(obj is message)
+    *bytes = of_message_length_get(OF_OBJECT_TO_MESSAGE(obj));
+}
+
+/**
+ * Set the length of a message object as reported on the wire
+ * @param obj The object to check
+ * @param bytes The new length of the object
+ */
+void
+of_object_message_wire_length_set(of_object_t *obj, int bytes)
+{
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+    ASSERT(wbuf != NULL);
+
+    // ASSERT(obj is message)
+    of_message_length_set(OF_OBJECT_TO_MESSAGE(obj), bytes);
+}
+
+/****************************************************************
+ * TLV16 type/length functions
+ ****************************************************************/
+
+/**
+ * Many objects are TLVs and use uint16 for the type and length values
+ * stored on the wire at the beginning of the buffer.
+ */
+#define TLV16_WIRE_TYPE_OFFSET 0
+#define TLV16_WIRE_LENGTH_OFFSET 2
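+
+/*
+ * Illustrative sketch (not generated code): with the offsets above, a
+ * standard TLV16 object lays out as
+ *
+ *     offset 0: uint16 type
+ *     offset 2: uint16 length (bytes, including this 4-byte header)
+ *
+ * e.g. an OpenFlow 1.0 output action carries type 0 and length 8 in its
+ * first four bytes.
+ */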
+
+/**
+ * Get the length field from the wire for a standard TLV
+ * object that uses uint16 for both type and length.
+ * @param obj The object being referenced
+ * @param bytes (out) Where to store the length
+ */
+
+void
+of_tlv16_wire_length_get(of_object_t *obj, int *bytes)
+{
+    uint16_t val16;
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+    ASSERT(wbuf != NULL);
+
+    of_wire_buffer_u16_get(wbuf, 
+           OF_OBJECT_ABSOLUTE_OFFSET(obj, TLV16_WIRE_LENGTH_OFFSET), &val16);
+    *bytes = val16;
+}
+
+/**
+ * Set the length field in the wire buffer for a standard TLV
+ * object that uses uint16 for both type and length.
+ * @param obj The object being referenced
+ * @param bytes The length value to use
+ */
+
+void
+of_tlv16_wire_length_set(of_object_t *obj, int bytes)
+{
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+    ASSERT(wbuf != NULL);
+
+    of_wire_buffer_u16_set(wbuf, 
+        OF_OBJECT_ABSOLUTE_OFFSET(obj, TLV16_WIRE_LENGTH_OFFSET), bytes);
+}
+
+/**
+ * Get the type field from the wire for a standard TLV object that uses
+ * uint16 for both type and length.
+ * @param obj The object being referenced
+ * @param wire_type (out) Where to store the type
+ */
+
+static void
+of_tlv16_wire_type_get(of_object_t *obj, int *wire_type)
+{
+    uint16_t val16;
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+
+    of_wire_buffer_u16_get(wbuf, OF_OBJECT_ABSOLUTE_OFFSET(obj, 
+           TLV16_WIRE_TYPE_OFFSET), &val16);
+
+    *wire_type = val16;
+}
+
+/**
+ * Set the object ID based on the wire buffer for any TLV object
+ * @param obj The object being referenced
+ * @param id The ID value representing what should be stored.
+ */
+
+void
+of_tlv16_wire_object_id_set(of_object_t *obj, of_object_id_t id)
+{
+    int wire_type;
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+    ASSERT(wbuf != NULL);
+
+    wire_type = of_object_to_type_map[obj->version][id];
+    ASSERT(wire_type >= 0);
+
+    of_wire_buffer_u16_set(wbuf, 
+        OF_OBJECT_ABSOLUTE_OFFSET(obj, TLV16_WIRE_TYPE_OFFSET), wire_type);
+
+    if (wire_type == OF_EXPERIMENTER_TYPE) {
+        of_extension_object_id_set(obj, id);
+    }
+}
+
+/**
+ * Get the object ID of an extended action
+ * @param obj The object being referenced
+ * @param id Where to store the object ID
+ * @fixme:  This should be auto generated
+ *
+ * If unable to map to known extension, set id to generic "experimenter"
+ */
+
+#define OF_ACTION_EXPERIMENTER_ID_OFFSET 4
+#define OF_ACTION_EXPERIMENTER_SUBTYPE_OFFSET 8
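+
+/*
+ * For reference (a sketch consistent with the offsets above), an
+ * experimenter action starts with:
+ *
+ *     offset 0: uint16 type (experimenter/vendor)
+ *     offset 2: uint16 len
+ *     offset 4: uint32 experimenter id
+ *     offset 8: vendor-defined subtype (read as uint32 for BSN and
+ *               uint16 for Nicira below)
+ */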
+
+
+static void
+extension_action_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    uint32_t exp_id;
+    uint8_t *buf;
+
+    *id = OF_ACTION_EXPERIMENTER;
+
+    buf = OF_OBJECT_BUFFER_INDEX(obj, 0);
+    
+    buf_u32_get(buf + OF_ACTION_EXPERIMENTER_ID_OFFSET, &exp_id);
+
+    switch (exp_id) {
+    case OF_EXPERIMENTER_ID_BSN: {
+        uint32_t subtype;
+        buf_u32_get(buf + OF_ACTION_EXPERIMENTER_SUBTYPE_OFFSET, &subtype);
+        switch (subtype) {
+        case 1: *id = OF_ACTION_BSN_MIRROR; break;
+        case 2: *id = OF_ACTION_BSN_SET_TUNNEL_DST; break;
+        }
+        break;
+    }
+    case OF_EXPERIMENTER_ID_NICIRA: {
+        uint16_t subtype;
+        buf_u16_get(buf + OF_ACTION_EXPERIMENTER_SUBTYPE_OFFSET, &subtype);
+        switch (subtype) {
+        case 18: *id = OF_ACTION_NICIRA_DEC_TTL; break;
+        }
+        break;
+    }
+    }
+}
+
+/**
+ * Set wire data for extension objects, not messages.
+ *
+ * Currently handles the BSN mirror, BSN set-tunnel-dst and Nicira dec-TTL
+ * actions; ignores all others
+ */
+
+void
+of_extension_object_id_set(of_object_t *obj, of_object_id_t id)
+{
+    uint8_t *buf = OF_OBJECT_BUFFER_INDEX(obj, 0);
+    
+    switch (id) {
+    case OF_ACTION_BSN_MIRROR:
+    case OF_ACTION_ID_BSN_MIRROR:
+        buf_u32_set(buf + OF_ACTION_EXPERIMENTER_ID_OFFSET,
+                    OF_EXPERIMENTER_ID_BSN);
+        buf_u32_set(buf + OF_ACTION_EXPERIMENTER_SUBTYPE_OFFSET, 1);
+        break;
+    case OF_ACTION_BSN_SET_TUNNEL_DST:
+    case OF_ACTION_ID_BSN_SET_TUNNEL_DST:
+        buf_u32_set(buf + OF_ACTION_EXPERIMENTER_ID_OFFSET,
+                    OF_EXPERIMENTER_ID_BSN);
+        buf_u32_set(buf + OF_ACTION_EXPERIMENTER_SUBTYPE_OFFSET, 2);
+        break;
+    case OF_ACTION_NICIRA_DEC_TTL:
+    case OF_ACTION_ID_NICIRA_DEC_TTL:
+        buf_u32_set(buf + OF_ACTION_EXPERIMENTER_ID_OFFSET,
+                    OF_EXPERIMENTER_ID_NICIRA);
+        buf_u16_set(buf + OF_ACTION_EXPERIMENTER_SUBTYPE_OFFSET, 18);
+        break;
+    default:
+        break;
+    }
+}
+
+/**
+ * Get the object ID of an extended action
+ * @param obj The object being referenced
+ * @param id Where to store the object ID
+ * @fixme:  This should be auto generated
+ *
+ * If unable to map to known extension, set id to generic "experimenter"
+ */
+
+static void
+extension_action_id_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    uint32_t exp_id;
+    uint8_t *buf;
+
+    *id = OF_ACTION_ID_EXPERIMENTER;
+
+    buf = OF_OBJECT_BUFFER_INDEX(obj, 0);
+    
+    buf_u32_get(buf + OF_ACTION_EXPERIMENTER_ID_OFFSET, &exp_id);
+
+    switch (exp_id) {
+    case OF_EXPERIMENTER_ID_BSN: {
+        uint32_t subtype;
+        buf_u32_get(buf + OF_ACTION_EXPERIMENTER_SUBTYPE_OFFSET, &subtype);
+        switch (subtype) {
+        case 1: *id = OF_ACTION_ID_BSN_MIRROR; break;
+        case 2: *id = OF_ACTION_ID_BSN_SET_TUNNEL_DST; break;
+        }
+        break;
+    }
+    case OF_EXPERIMENTER_ID_NICIRA: {
+        uint16_t subtype;
+        buf_u16_get(buf + OF_ACTION_EXPERIMENTER_SUBTYPE_OFFSET, &subtype);
+        switch (subtype) {
+        case 18: *id = OF_ACTION_ID_NICIRA_DEC_TTL; break;
+        }
+        break;
+    }
+    }
+}
+
+
+/**
+ * Get the object ID based on the wire buffer for an action object
+ * @param obj The object being referenced
+ * @param id Where to store the object ID
+ */
+
+
+void
+of_action_wire_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    int wire_type;
+
+    of_tlv16_wire_type_get(obj, &wire_type);
+    if (wire_type == OF_EXPERIMENTER_TYPE) {
+        extension_action_object_id_get(obj, id);
+        return;
+    }
+
+    ASSERT(wire_type >= 0 && wire_type < OF_ACTION_ITEM_COUNT);
+
+    *id = of_action_type_to_id[obj->version][wire_type];
+    ASSERT(*id != OF_OBJECT_INVALID);
+}
+
+/**
+ * Get the object ID based on the wire buffer for an action ID object
+ * @param obj The object being referenced
+ * @param id Where to store the object ID
+ */
+
+
+void
+of_action_id_wire_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    int wire_type;
+
+    of_tlv16_wire_type_get(obj, &wire_type);
+    if (wire_type == OF_EXPERIMENTER_TYPE) {
+        extension_action_id_object_id_get(obj, id);
+        return;
+    }
+
+    ASSERT(wire_type >= 0 && wire_type < OF_ACTION_ID_ITEM_COUNT);
+
+    *id = of_action_id_type_to_id[obj->version][wire_type];
+    ASSERT(*id != OF_OBJECT_INVALID);
+}
+
+/**
+ * @fixme to do when we have instruction extensions
+ * See extension_action above
+ */
+
+static int
+extension_instruction_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    (void)obj;
+
+    *id = OF_INSTRUCTION_EXPERIMENTER;
+
+    return OF_ERROR_NONE;
+}
+
+/**
+ * Get the object ID based on the wire buffer for an instruction object
+ * @param obj The object being referenced
+ * @param id Where to store the object ID
+ */
+
+void
+of_instruction_wire_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    int wire_type;
+
+    of_tlv16_wire_type_get(obj, &wire_type);
+    if (wire_type == OF_EXPERIMENTER_TYPE) {
+        extension_instruction_object_id_get(obj, id);
+        return;
+    }
+
+    ASSERT(wire_type >= 0 && wire_type < OF_INSTRUCTION_ITEM_COUNT);
+
+    *id = of_instruction_type_to_id[obj->version][wire_type];
+    ASSERT(*id != OF_OBJECT_INVALID);
+}
+
+
+/**
+ * @fixme to do when we have queue_prop extensions
+ * See extension_action above
+ */
+
+static void
+extension_queue_prop_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    (void)obj;
+
+    *id = OF_QUEUE_PROP_EXPERIMENTER;
+}
+
+/**
+ * Get the object ID based on the wire buffer for a queue_prop object
+ * @param obj The object being referenced
+ * @param id Where to store the object ID
+ */
+
+void
+of_queue_prop_wire_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    int wire_type;
+
+    of_tlv16_wire_type_get(obj, &wire_type);
+    if (wire_type == OF_EXPERIMENTER_TYPE) {
+        extension_queue_prop_object_id_get(obj, id);
+        return;
+    }
+
+    ASSERT(wire_type >= 0 && wire_type < OF_QUEUE_PROP_ITEM_COUNT);
+
+    *id = of_queue_prop_type_to_id[obj->version][wire_type];
+    ASSERT(*id != OF_OBJECT_INVALID);
+}
+
+
+/**
+ * @fixme to do when we have table_feature_prop extensions
+ * See extension_action above
+ */
+
+static void
+extension_table_feature_prop_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    (void)obj;
+
+    *id = OF_TABLE_FEATURE_PROP_EXPERIMENTER;
+}
+
+/**
+ * Table feature property object ID determination
+ *
+ * @param obj The object being referenced
+ * @param id Where to store the object ID
+ */
+
+void
+of_table_feature_prop_wire_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    int wire_type;
+
+    of_tlv16_wire_type_get(obj, &wire_type);
+    if (wire_type == OF_EXPERIMENTER_TYPE) {
+        extension_table_feature_prop_object_id_get(obj, id);
+        return;
+    }
+
+    ASSERT(wire_type >= 0 && wire_type < OF_TABLE_FEATURE_PROP_ITEM_COUNT);
+
+    *id = of_table_feature_prop_type_to_id[obj->version][wire_type];
+    ASSERT(*id != OF_OBJECT_INVALID);
+}
+
+/**
+ * Get the object ID based on the wire buffer for a meter_band object
+ * @param obj The object being referenced
+ * @param id Where to store the object ID
+ */
+
+void
+of_meter_band_wire_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    int wire_type;
+
+    of_tlv16_wire_type_get(obj, &wire_type);
+    if (wire_type == OF_EXPERIMENTER_TYPE) {
+        *id = OF_METER_BAND_EXPERIMENTER;
+        return;
+    }
+
+    ASSERT(wire_type >= 0 && wire_type < OF_METER_BAND_ITEM_COUNT);
+
+    *id = of_meter_band_type_to_id[obj->version][wire_type];
+    ASSERT(*id != OF_OBJECT_INVALID);
+}
+
+/**
+ * Get the object ID based on the wire buffer for a hello_elem object
+ * @param obj The object being referenced
+ * @param id Where to store the object ID
+ */
+
+void
+of_hello_elem_wire_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    int wire_type;
+
+    of_tlv16_wire_type_get(obj, &wire_type);
+    ASSERT(wire_type >= 0 && wire_type < OF_HELLO_ELEM_ITEM_COUNT);
+    *id = of_hello_elem_type_to_id[obj->version][wire_type];
+    ASSERT(*id != OF_OBJECT_INVALID);
+}
+
+/****************************************************************
+ * OXM type/length functions.
+ ****************************************************************/
+
+/* Where does the OXM type-length header lie in the buffer */
+#define OXM_HDR_OFFSET 0
+
+/**
+ * Get the OXM header (type-length fields) from the wire buffer
+ * associated with an OXM object
+ *
+ * Asserts if the buffer is invalid; stores the OXM type/length word in tl_p
+ */
+
+#define _GET_OXM_TYPE_LEN(obj, tl_p, wbuf) do {                         \
+        wbuf = OF_OBJECT_TO_WBUF(obj);                                  \
+        ASSERT(wbuf != NULL);                                           \
+        of_wire_buffer_u32_get(wbuf,                                    \
+            OF_OBJECT_ABSOLUTE_OFFSET(obj, OXM_HDR_OFFSET), (tl_p));    \
+    } while (0)
+
+#define _SET_OXM_TYPE_LEN(obj, tl_p, wbuf) do {                         \
+        wbuf = OF_OBJECT_TO_WBUF(obj);                                  \
+        ASSERT(wbuf != NULL);                                           \
+        of_wire_buffer_u32_set(wbuf,                                    \
+            OF_OBJECT_ABSOLUTE_OFFSET(obj, OXM_HDR_OFFSET), (tl_p));    \
+    } while (0)
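+
+/*
+ * Sketch of the OXM header these macros manipulate (per the OpenFlow 1.2+
+ * spec): a single uint32 packing oxm_class (16 bits), oxm_field (7 bits),
+ * the has-mask flag (1 bit) and oxm_length (8 bits, payload bytes only).
+ * The OF_OXM_LENGTH_GET/SET and OF_OXM_MASKED_TYPE_GET/SET macros used
+ * below extract and update pieces of this word.
+ */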
+
+/**
+ * Get the length of an OXM object from the wire buffer
+ * @param obj The object whose wire buffer is an OXM type
+ * @param bytes (out) Where length is stored 
+ */
+
+void
+of_oxm_wire_length_get(of_object_t *obj, int *bytes)
+{
+    uint32_t type_len;
+    of_wire_buffer_t *wbuf;
+
+    _GET_OXM_TYPE_LEN(obj, &type_len, wbuf);
+    *bytes = OF_OXM_LENGTH_GET(type_len);
+}
+
+/**
+ * Set the length of an OXM object in the wire buffer
+ * @param obj The object whose wire buffer is an OXM type
+ * @param bytes Value to store in wire buffer
+ */
+
+void
+of_oxm_wire_length_set(of_object_t *obj, int bytes)
+{
+    uint32_t type_len;
+    of_wire_buffer_t *wbuf;
+
+    ASSERT(bytes >= 0 && bytes < 256);
+
+    /* Read-modify-write */
+    _GET_OXM_TYPE_LEN(obj, &type_len, wbuf);
+    OF_OXM_LENGTH_SET(type_len, bytes);
+    of_wire_buffer_u32_set(wbuf, 
+           OF_OBJECT_ABSOLUTE_OFFSET(obj, OXM_HDR_OFFSET), type_len);
+}
+
+/**
+ * Get the object ID of an OXM object based on the wire buffer type
+ * @param obj The object whose wire buffer is an OXM type
+ * @param id (out) Where the ID is stored 
+ */
+
+void
+of_oxm_wire_object_id_get(of_object_t *obj, of_object_id_t *id)
+{
+    uint32_t type_len;
+    int wire_type;
+    of_wire_buffer_t *wbuf;
+
+    _GET_OXM_TYPE_LEN(obj, &type_len, wbuf);
+    wire_type = OF_OXM_MASKED_TYPE_GET(type_len);
+    *id = of_oxm_to_object_id(wire_type, obj->version);
+}
+
+/**
+ * Set the wire type of an OXM object based on the object ID passed
+ * @param obj The object whose wire buffer is an OXM type
+ * @param id The object ID mapped to an OXM wire type which is stored
+ */
+
+void
+of_oxm_wire_object_id_set(of_object_t *obj, of_object_id_t id)
+{
+    uint32_t type_len;
+    int wire_type;
+    of_wire_buffer_t *wbuf;
+
+    ASSERT(OF_OXM_VALID_ID(id));
+
+    /* Read-modify-write */
+    _GET_OXM_TYPE_LEN(obj, &type_len, wbuf);
+    wire_type = of_object_to_wire_type(id, obj->version);
+    ASSERT(wire_type >= 0);
+    OF_OXM_MASKED_TYPE_SET(type_len, wire_type);
+    of_wire_buffer_u32_set(wbuf, 
+           OF_OBJECT_ABSOLUTE_OFFSET(obj, OXM_HDR_OFFSET), type_len);
+}
+
+
+
+#define OF_U16_LEN_LENGTH_OFFSET 0
+
+/**
+ * Get the wire length for an object with a uint16 length as first member
+ * @param obj The object being referenced
+ * @param bytes Pointer to location to store length
+ */
+void
+of_u16_len_wire_length_get(of_object_t *obj, int *bytes)
+{
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+    uint16_t u16;
+
+    ASSERT(wbuf != NULL);
+
+    of_wire_buffer_u16_get(wbuf, 
+           OF_OBJECT_ABSOLUTE_OFFSET(obj, OF_U16_LEN_LENGTH_OFFSET),
+           &u16);
+
+    *bytes = u16;
+}
+
+/**
+ * Set the wire length for an object with a uint16 length as first member
+ * @param obj The object being referenced
+ * @param bytes The length of the object
+ */
+
+void
+of_u16_len_wire_length_set(of_object_t *obj, int bytes)
+{
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+    ASSERT(wbuf != NULL);
+
+    /* ASSERT(obj is u16-len entry) */
+
+    of_wire_buffer_u16_set(wbuf, 
+           OF_OBJECT_ABSOLUTE_OFFSET(obj, OF_U16_LEN_LENGTH_OFFSET),
+           bytes);
+}
+
+
+#define OF_PACKET_QUEUE_LENGTH_OFFSET(ver) \
+    (((ver) >= OF_VERSION_1_2) ? 8 : 4)
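+
+/*
+ * Consistent with the offset above: in OF 1.0/1.1 ofp_packet_queue is
+ * queue_id(4), len(2), pad(2), so the length lives at offset 4; from
+ * OF 1.2 a port(4) field precedes the length, pushing it to offset 8.
+ */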
+
+/**
+ * Get the wire length for a packet queue object
+ * @param obj The object being referenced
+ * @param bytes Pointer to location to store length
+ *
+ * The length is a uint16 at the offset indicated above
+ */
+void
+of_packet_queue_wire_length_get(of_object_t *obj, int *bytes)
+{
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+    uint16_t u16;
+    int offset;
+
+    ASSERT(wbuf != NULL);
+
+    /* ASSERT(obj is packet queue obj) */
+    offset = OF_PACKET_QUEUE_LENGTH_OFFSET(obj->version);
+    of_wire_buffer_u16_get(wbuf, OF_OBJECT_ABSOLUTE_OFFSET(obj, offset),
+                           &u16);
+
+    *bytes = u16;
+}
+
+/**
+ * Set the wire length for a packet queue object
+ * @param obj The object being referenced
+ * @param bytes The length of the object
+ *
+ * The length is a uint16 at the offset indicated above
+ */
+
+void
+of_packet_queue_wire_length_set(of_object_t *obj, int bytes)
+{
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+    int offset;
+
+    ASSERT(wbuf != NULL);
+
+    /* ASSERT(obj is packet queue obj) */
+    offset = OF_PACKET_QUEUE_LENGTH_OFFSET(obj->version);
+    of_wire_buffer_u16_set(wbuf, OF_OBJECT_ABSOLUTE_OFFSET(obj, offset),
+                                  bytes);
+}
+
+/**
+ * Get the wire length for a meter band stats list
+ * @param obj The object being referenced
+ * @param bytes Pointer to location to store length
+ *
+ * The object must have a meter_stats object as its parent
+ */
+
+void
+of_list_meter_band_stats_wire_length_get(of_object_t *obj, int *bytes)
+{
+    ASSERT(obj->parent != NULL);
+    ASSERT(obj->parent->object_id == OF_METER_STATS);
+
+    /* We're counting on the parent being properly initialized already.
+     * The length is stored in a uint16 at offset 4 of the parent.
+     */
+    *bytes = obj->parent->length - OF_OBJECT_FIXED_LENGTH(obj->parent);
+}
+
+#define OF_METER_STATS_LENGTH_OFFSET 4
+
+/**
+ * Get/set the wire length for a meter stats object
+ * @param obj The object being referenced
+ * @param bytes Pointer to location to store length
+ *
+ * It's almost a TLV....
+ */
+
+void
+of_meter_stats_wire_length_get(of_object_t *obj, int *bytes)
+{
+    uint16_t val16;
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+    ASSERT(wbuf != NULL);
+    of_wire_buffer_u16_get(wbuf, 
+               OF_OBJECT_ABSOLUTE_OFFSET(obj, OF_METER_STATS_LENGTH_OFFSET),
+               &val16);
+    *bytes = val16;
+}
+
+void
+of_meter_stats_wire_length_set(of_object_t *obj, int bytes)
+{
+    of_wire_buffer_t *wbuf = OF_OBJECT_TO_WBUF(obj);
+    ASSERT(wbuf != NULL);
+
+    of_wire_buffer_u16_set(wbuf, 
+        OF_OBJECT_ABSOLUTE_OFFSET(obj, OF_METER_STATS_LENGTH_OFFSET), bytes);
+}
+
+/*
+ * Non-message extension push wire values
+ */
+
+int
+of_extension_object_wire_push(of_object_t *obj)
+{
+    of_action_bsn_mirror_t *action_mirror;
+    of_action_id_bsn_mirror_t *action_id_mirror;
+    of_action_bsn_set_tunnel_dst_t *action_set_tunnel_dst;
+    of_action_id_bsn_set_tunnel_dst_t *action_id_set_tunnel_dst;
+    of_action_nicira_dec_ttl_t *action_nicira_dec_ttl;
+    of_action_id_nicira_dec_ttl_t *action_id_nicira_dec_ttl;
+
+    /* Push exp type, subtype */
+    switch (obj->object_id) {
+    case OF_ACTION_BSN_MIRROR:
+        action_mirror = (of_action_bsn_mirror_t *)obj;
+        of_action_bsn_mirror_experimenter_set(action_mirror,
+            OF_EXPERIMENTER_ID_BSN);
+        of_action_bsn_mirror_subtype_set(action_mirror, 1);
+        break;
+    case OF_ACTION_ID_BSN_MIRROR:
+        action_id_mirror = (of_action_id_bsn_mirror_t *)obj;
+        of_action_id_bsn_mirror_experimenter_set(action_id_mirror,
+            OF_EXPERIMENTER_ID_BSN);
+        of_action_id_bsn_mirror_subtype_set(action_id_mirror, 1);
+        break;
+    case OF_ACTION_BSN_SET_TUNNEL_DST:
+        action_set_tunnel_dst = (of_action_bsn_set_tunnel_dst_t *)obj;
+        of_action_bsn_set_tunnel_dst_experimenter_set(action_set_tunnel_dst,
+            OF_EXPERIMENTER_ID_BSN);
+        of_action_bsn_set_tunnel_dst_subtype_set(action_set_tunnel_dst, 2);
+        break;
+    case OF_ACTION_ID_BSN_SET_TUNNEL_DST:
+        action_id_set_tunnel_dst = (of_action_id_bsn_set_tunnel_dst_t *)obj;
+        of_action_id_bsn_set_tunnel_dst_experimenter_set(action_id_set_tunnel_dst,
+            OF_EXPERIMENTER_ID_BSN);
+        of_action_id_bsn_set_tunnel_dst_subtype_set(action_id_set_tunnel_dst, 2);
+        break;
+    case OF_ACTION_NICIRA_DEC_TTL:
+        action_nicira_dec_ttl = (of_action_nicira_dec_ttl_t *)obj;
+        of_action_nicira_dec_ttl_experimenter_set(action_nicira_dec_ttl,
+            OF_EXPERIMENTER_ID_NICIRA);
+        of_action_nicira_dec_ttl_subtype_set(action_nicira_dec_ttl, 18);
+        break;
+    case OF_ACTION_ID_NICIRA_DEC_TTL:
+        action_id_nicira_dec_ttl = (of_action_id_nicira_dec_ttl_t *)obj;
+        of_action_id_nicira_dec_ttl_experimenter_set(action_id_nicira_dec_ttl,
+            OF_EXPERIMENTER_ID_NICIRA);
+        of_action_id_nicira_dec_ttl_subtype_set(action_id_nicira_dec_ttl, 18);
+        break;
+    default:
+        break;
+    }
+
+    return OF_ERROR_NONE;
+}
diff --git a/c_gen/templates/of_utils.c b/c_gen/templates/of_utils.c
new file mode 100644
index 0000000..f9c1972
--- /dev/null
+++ b/c_gen/templates/of_utils.c
@@ -0,0 +1,78 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/****************************************************************
+ * File: of_utils.c
+ *
+ * Some utilities provided based on LOCI code generation
+ *
+ ****************************************************************/
+
+#include <loci/of_utils.h>
+
+
+/**
+ * Check if the given port is used as an output for any entry on the list
+ * @param actions The list of actions being checked
+ * @param outport The port being sought
+ * @returns Boolean, true if entry has an output action to outport
+ *
+ * @fixme VERSION Currently only OF 1.0 supported
+ * @fixme Check for error return in accessor
+ *
+ * If outport is "wildcard", the test should be ignored, so return true
+ */
+
+int
+of_action_list_has_out_port(of_list_action_t *actions, of_port_no_t outport)
+{
+    of_action_t elt;
+    of_action_output_t *output;
+    int loop_rv;
+    of_port_no_t port_no;
+    int rv = 0;
+
+    if (outport == OF_PORT_DEST_WILDCARD) { /* Same as OFPP_ANY */
+        return 1;
+    }
+
+    output = &elt.output;
+    OF_LIST_ACTION_ITER(actions, &elt, loop_rv) {
+        if (output->object_id == OF_ACTION_OUTPUT) {
+            of_action_output_port_get(output, &port_no);
+            if (port_no == outport) {
+                rv = 1;
+                break;
+            }
+        }
+    }
+
+    return rv;
+}
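+
+/*
+ * Usage sketch (illustrative only, assuming the generated
+ * of_flow_add_actions_bind accessor): given an OF 1.0 flow_add 'flow',
+ *
+ *     of_list_action_t actions;
+ *     of_flow_add_actions_bind(flow, &actions);
+ *     if (of_action_list_has_out_port(&actions, port)) { ... }
+ */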
+
diff --git a/c_gen/templates/of_utils.h b/c_gen/templates/of_utils.h
new file mode 100644
index 0000000..1a3a29a
--- /dev/null
+++ b/c_gen/templates/of_utils.h
@@ -0,0 +1,45 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/****************************************************************
+ * File: of_utils.h
+ *
+ * Header file for some utilities provided based on LOCI code generation
+ *
+ ****************************************************************/
+
+#ifndef OF_UTILS_H
+#define OF_UTILS_H
+
+#include <loci/loci.h>
+
+extern int of_action_list_has_out_port(of_list_action_t *actions, 
+                                       of_port_no_t outport);
+
+#endif
diff --git a/c_gen/templates/of_wire_buf.c b/c_gen/templates/of_wire_buf.c
new file mode 100644
index 0000000..acfcb84
--- /dev/null
+++ b/c_gen/templates/of_wire_buf.c
@@ -0,0 +1,159 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+/****************************************************************
+ *
+ * of_wire_buf.c
+ *
+ * Implementation of more complicated of_wire_buf operations
+ *
+ ****************************************************************/
+
+#include <loci/loci.h>
+
+#if 0
+static of_match_v1_t *
+wire_buf_v1_to_match(of_wire_buffer_t *wbuf, int offset)
+{
+    of_match_v1_t *match;
+    match = of_match_v1_new(OF_VERSION_1_0);
+    return match;
+}
+
+/*
+ * First pass at wire buf to match conversions.  These involve a lot
+ * of copying and could be made more efficient.
+ */
+int
+of_wire_buffer_of_match_get(of_object_t *obj, int offset, of_match_t *match)
+{
+    switch (obj->version) {
+    case OF_VERSION_1_0:
+        break;
+    case OF_VERSION_1_1:
+        break;
+    case OF_VERSION_1_2:
+        break;
+    default:
+        return OF_ERROR_VERSION;
+        break;
+    }
+
+    return OF_ERROR_NONE;
+}
+
+/**
+ * Write a host match structure into the wire buffer
+ * @param obj The object that owns the buffer to write to
+ * @param offset The start location in the wire buffer
+ * @param match Pointer to the host match object
+ * @param cur_len The current length of the object in the buffer
+ *
+ * If the current length is different than the length of the new data
+ * being written, then move the remains of the buffer.  This only applies
+ * to version 1.2 (though it should apply to 1.1).
+ */
+
+int
+of_wire_buffer_of_match_set(of_object_t *obj, int offset, 
+                            of_match_t *match, int cur_len)
+{
+    // of_octets_t octets;
+
+    switch (obj->version) {
+    case OF_VERSION_1_0:
+        break;
+    case OF_VERSION_1_1:
+        break;
+    case OF_VERSION_1_2:
+        // of_match_v3_serialize(match, &octets);
+        break;
+    default:
+        return OF_ERROR_VERSION;
+        break;
+    }
+
+    return OF_ERROR_NONE;
+}
+#endif
+
+/**
+ * Replace data in the data buffer, possibly with a new
+ * length or appending to buffer.
+ *
+ * @param wbuf The wire buffer being updated.
+ * @param offset The start point of the update
+ * @param old_len The number of bytes being replaced
+ * @param data Source of bytes to write into the buffer
+ * @param new_len The number of bytes to write
+ *
+ * The buffer may grow for this operation.  Current byte count
+ * is pre-grow for the replace.
+ *
+ * The current byte count for the buffer is updated.
+ * 
+ */
+
+void
+of_wire_buffer_replace_data(of_wire_buffer_t *wbuf, 
+                            int offset, 
+                            int old_len,
+                            uint8_t *data,
+                            int new_len)
+{
+    int bytes;
+    uint8_t *src_ptr, *dst_ptr;
+    int cur_bytes;
+
+    ASSERT(wbuf != NULL);
+
+    cur_bytes = wbuf->current_bytes;
+
+    /* The region being replaced must lie within the current buffer */
+    ASSERT(old_len + offset <= wbuf->current_bytes);
+
+    if (old_len < new_len) {
+        of_wire_buffer_grow(wbuf, offset + new_len);
+    } else {
+        wbuf->current_bytes += (new_len - old_len); // may decrease size
+    }
+
+    if ((old_len + offset < cur_bytes) && (old_len != new_len)) {
+        /* Need to move back of buffer */
+        src_ptr = &wbuf->buf[offset + old_len];
+        dst_ptr = &wbuf->buf[offset + new_len];
+        bytes = cur_bytes - (offset + old_len);
+        MEMMOVE(dst_ptr, src_ptr, bytes);
+    }
+
+    dst_ptr = &wbuf->buf[offset];
+    MEMCPY(dst_ptr, data, new_len);
+
+    ASSERT(wbuf->current_bytes == cur_bytes + (new_len - old_len));
+}
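+
+/*
+ * Worked example (illustrative): with a 32-byte buffer, replacing 12 bytes
+ * at offset 8 with 4 bytes of new data takes the else branch (current_bytes
+ * becomes 24), moves the tail [20, 32) down to [12, 24), and copies the 4
+ * new bytes to offset 8; the final assertion checks 24 == 32 + (4 - 12).
+ */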
diff --git a/c_gen/templates/of_wire_buf.h b/c_gen/templates/of_wire_buf.h
new file mode 100644
index 0000000..cf1ee17
--- /dev/null
+++ b/c_gen/templates/of_wire_buf.h
@@ -0,0 +1,884 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+/* Copyright 2013, Big Switch Networks, Inc. */
+
+#if !defined(_OF_WIRE_BUF_H_)
+#define _OF_WIRE_BUF_H_
+
+#include <string.h>
+#include <loci/loci_base.h>
+#include <loci/of_object.h>
+#include <loci/of_match.h>
+#include <loci/of_buffer.h>
+
+/****************************************************************
+ *
+ * Wire buffer declaration, constructor, data alloc, delete
+ *
+ ****************************************************************/
+
+/* Maximum length of an OpenFlow message. All wire buffers allocated for
+ * new objects (that don't come from a message) are this length to avoid
+ * needing to grow the buffers. */
+#define OF_WIRE_BUFFER_MAX_LENGTH 65535
+
+/**
+ * Buffer management structure
+ */
+typedef struct of_wire_buffer_s {
+    /** Pointer to a monolithic data buffer */
+    uint8_t *buf;
+
+    /** Length of buffer actually allocated */
+    int alloc_bytes;
+    /** Current extent actually used */
+    int current_bytes;
+    /** If not NULL, use this to dealloc buf */
+    of_buffer_free_f free;
+} of_wire_buffer_t;
+
+/**
+ * Decouples object from underlying wire buffer
+ *
+ * Called a 'slice' in some places.
+ */
+typedef struct of_wire_object_s {
+    /** A pointer to the underlying buffer's management structure. */
+    of_wire_buffer_t *wbuf;  
+    /** The start offset for this object relative to the start of the
+     * underlying buffer */
+    int obj_offset;
+    /** Boolean, whether the object owns the wire buffer. */
+    char owned;
+} of_wire_object_t;
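+
+/*
+ * Example of the decoupling (a sketch): when an object is bound to an entry
+ * inside a message's action list, its wire object points at the same
+ * of_wire_buffer_t as the message, with obj_offset set to where the entry
+ * starts in that buffer and owned left unset.
+ */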
+
+#define WBUF_BUF(wbuf) (wbuf)->buf
+#define WBUF_ALLOC_BYTES(wbuf) (wbuf)->alloc_bytes
+#define WBUF_CURRENT_BYTES(wbuf) (wbuf)->current_bytes
+
+/**
+ * For buffer access, fail an assertion if the current buffer
+ * is not big enough.
+ * @param wbuf Pointer to an of_wire_buffer_t structure
+ * @param offset The extent of the buffer required
+ */
+#define OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, offset)                      \
+    ASSERT(((wbuf) != NULL) && (WBUF_BUF(wbuf) != NULL) &&             \
+           (offset > 0) && (WBUF_CURRENT_BYTES(wbuf) >= offset))
+
+/*
+ * Index a wire buffer
+ * Index a wire object (from obj_offset)
+ * Index a LOCI object
+ */
+
+/**
+ * Return a pointer to a particular offset in a wire buffer's data
+ * @param wbuf Pointer to an of_wire_buffer_t structure
+ * @param offset The location to reference
+ */
+#define OF_WIRE_BUFFER_INDEX(wbuf, offset) (&((WBUF_BUF(wbuf))[offset]))
+
+/**
+ * Return a pointer to a particular offset in the underlying buffer
+ * associated with a wire object
+ * @param wobj Pointer to an of_wire_object_t structure
+ * @param offset The location to reference relative to the start of the object
+ */
+#define OF_WIRE_OBJECT_INDEX(wobj, offset) \
+    OF_WIRE_BUFFER_INDEX((wobj)->wbuf, (offset) + (wobj)->obj_offset)
+
+/****************************************************************
+ * Object specific macros; of_object_t includes a wire_object
+ ****************************************************************/
+
+/**
+ * Return a pointer to a particular offset in the underlying buffer
+ * associated with a wire object
+ * @param obj Pointer to an of_object_t object
+ * @param offset The location to reference relative to the start of the object
+ */
+#define OF_OBJECT_BUFFER_INDEX(obj, offset) \
+    OF_WIRE_OBJECT_INDEX(&((obj)->wire_object), offset)
+
+/**
+ * Return the absolute offset as an integer from a object-relative offset
+ * @param obj Pointer to an of_wire_object_t structure
+ * @param offset The location to reference relative to the start of the object
+ */
+#define OF_OBJECT_ABSOLUTE_OFFSET(obj, offset) \
+    ((obj)->wire_object.obj_offset + offset)
+
+
+/**
+ * Map a generic object to the underlying wire buffer object (not the octets)
+ *
+ * Treat as private
+ */
+#define OF_OBJECT_TO_WBUF(obj) ((obj)->wire_object.wbuf)
+
+
+
+/**
+ * Minimum allocation size for wire buffer object
+ */
+#define OF_WIRE_BUFFER_MIN_ALLOC_BYTES 128
+
+/**
+ * Allocate a wire buffer object and the underlying data buffer.
+ * The wire buffer is initially empty (current_bytes == 0).
+ * @param a_bytes The number of bytes to allocate.
+ * @returns A wire buffer object if successful or NULL
+ */
+static inline of_wire_buffer_t *
+of_wire_buffer_new(int a_bytes)
+{
+    of_wire_buffer_t *wbuf;
+
+    wbuf = (of_wire_buffer_t *)MALLOC(sizeof(of_wire_buffer_t));
+    if (wbuf == NULL) {
+        return NULL;
+    }
+    MEMSET(wbuf, 0, sizeof(of_wire_buffer_t));
+
+    if (a_bytes < OF_WIRE_BUFFER_MIN_ALLOC_BYTES) {
+        a_bytes = OF_WIRE_BUFFER_MIN_ALLOC_BYTES;
+    }
+
+    if ((wbuf->buf = (uint8_t *)MALLOC(a_bytes)) == NULL) {
+        FREE(wbuf);
+        return NULL;
+    }
+    MEMSET(wbuf->buf, 0, a_bytes);
+    wbuf->current_bytes = 0;
+    wbuf->alloc_bytes = a_bytes;
+
+    return (of_wire_buffer_t *)wbuf;
+}
+
+/**
+ * Allocate a wire buffer object and bind it to an existing buffer.
+ * @param buf       Existing buffer.
+ * @param bytes     Size of buf.
+ * @param buf_free  Function called to deallocate buf.
+ * @returns A wire buffer object if successful or NULL
+ */
+static inline of_wire_buffer_t *
+of_wire_buffer_new_bind(uint8_t *buf, int bytes, of_buffer_free_f buf_free)
+{
+    of_wire_buffer_t *wbuf;
+
+    wbuf = (of_wire_buffer_t *)MALLOC(sizeof(of_wire_buffer_t));
+    if (wbuf == NULL) {
+        return NULL;
+    }
+
+    wbuf->buf = buf;
+    wbuf->free = buf_free;
+    wbuf->current_bytes = bytes;
+    wbuf->alloc_bytes = bytes;
+
+    return (of_wire_buffer_t *)wbuf;
+}
+
+static inline void
+of_wire_buffer_free(of_wire_buffer_t *wbuf)
+{
+    if (wbuf == NULL) return;
+
+    if (wbuf->buf != NULL) {
+        if (wbuf->free != NULL) {
+            wbuf->free(wbuf->buf);
+        } else {
+            FREE(wbuf->buf);
+        }
+    }
+
+    FREE(wbuf);
+}
+
+static inline void
+of_wire_buffer_steal(of_wire_buffer_t *wbuf, uint8_t **buffer)
+{
+    *buffer = wbuf->buf;
+    /* Mark underlying data buffer as taken */
+    wbuf->buf = NULL;
+    of_wire_buffer_free(wbuf);
+}
+
+/**
+ * Increase the currently used length of the wire buffer.
+ * Will fail an assertion if the allocated length is not long enough.
+ *
+ * @param wbuf Pointer to the wire buffer structure
+ * @param bytes Total number of bytes buffer should grow to
+ */
+
+static inline void
+of_wire_buffer_grow(of_wire_buffer_t *wbuf, int bytes)
+{
+    ASSERT(wbuf != NULL);
+    ASSERT(wbuf->alloc_bytes >= bytes);
+    if (bytes > wbuf->current_bytes) {
+        wbuf->current_bytes = bytes;
+    }
+}
+
+/* TBD */
+
+
+/**
+ * Get a uint8_t scalar from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to where to put value
+ *
+ * The underlying buffer accessor functions handle endian and alignment.
+ */
+
+static inline void
+of_wire_buffer_u8_get(of_wire_buffer_t *wbuf, int offset, uint8_t *value)
+{
+    OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, offset + (int) sizeof(uint8_t));
+    buf_u8_get(OF_WIRE_BUFFER_INDEX(wbuf, offset), value);
+}
+
+/**
+ * Set a uint8_t scalar in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value The value to store
+ *
+ * The underlying buffer accessor functions handle endian and alignment.
+ */
+
+static inline void
+of_wire_buffer_u8_set(of_wire_buffer_t *wbuf, int offset, uint8_t value)
+{
+    OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, offset + (int) sizeof(uint8_t));
+    buf_u8_set(OF_WIRE_BUFFER_INDEX(wbuf, offset), value);
+}
+
+/**
+ * Get a uint16_t scalar from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to where to put value
+ *
+ * The underlying buffer accessor functions handle endian and alignment.
+ */
+
+static inline void
+of_wire_buffer_u16_get(of_wire_buffer_t *wbuf, int offset, uint16_t *value)
+{
+    OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, offset + (int) sizeof(uint16_t));
+    buf_u16_get(OF_WIRE_BUFFER_INDEX(wbuf, offset), value);
+}
+
+/**
+ * Set a uint16_t scalar in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value The value to store
+ *
+ * The underlying buffer accessor functions handle endian and alignment.
+ */
+
+static inline void
+of_wire_buffer_u16_set(of_wire_buffer_t *wbuf, int offset, uint16_t value)
+{
+    OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, offset + (int) sizeof(uint16_t));
+    buf_u16_set(OF_WIRE_BUFFER_INDEX(wbuf, offset), value);
+}
+
+/**
+ * Get a uint32_t scalar from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to where to put value
+ *
+ * The underlying buffer accessor functions handle endian and alignment.
+ */
+
+static inline void
+of_wire_buffer_u32_get(of_wire_buffer_t *wbuf, int offset, uint32_t *value)
+{
+    OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, offset + (int) sizeof(uint32_t));
+    buf_u32_get(OF_WIRE_BUFFER_INDEX(wbuf, offset), value);
+}
+
+/**
+ * Set a uint32_t scalar in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value The value to store
+ *
+ * The underlying buffer accessor functions handle endian and alignment.
+ */
+
+static inline void
+of_wire_buffer_u32_set(of_wire_buffer_t *wbuf, int offset, uint32_t value)
+{
+    OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, offset + (int) sizeof(uint32_t));
+    buf_u32_set(OF_WIRE_BUFFER_INDEX(wbuf, offset), value);
+}
+
+/**
+ * Get a uint64_t scalar from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to where to put value
+ *
+ * The underlying buffer accessor functions handle endian and alignment.
+ */
+
+static inline void
+of_wire_buffer_u64_get(of_wire_buffer_t *wbuf, int offset, uint64_t *value)
+{
+    OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, offset + (int) sizeof(uint64_t));
+    buf_u64_get(OF_WIRE_BUFFER_INDEX(wbuf, offset), value);
+}
+
+/**
+ * Set a uint64_t scalar in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value The value to store
+ *
+ * The underlying buffer accessor functions handle endian and alignment.
+ */
+
+static inline void
+of_wire_buffer_u64_set(of_wire_buffer_t *wbuf, int offset, uint64_t value)
+{
+    OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, offset + (int) sizeof(uint64_t));
+    buf_u64_set(OF_WIRE_BUFFER_INDEX(wbuf, offset), value);
+}
+
+/**
+ * Get a generic OF match structure from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to the structure to update
+ *
+ * NOT IMPLEMENTED.
+ *
+ */
+
+static inline void
+of_wire_buffer_match_get(int version, of_wire_buffer_t *wbuf, int offset,
+                      of_match_t *value)
+{
+    ASSERT(0);
+}
+
+/**
+ * Set a generic OF match structure in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to the structure to store
+ *
+ * NOT IMPLEMENTED.
+ *
+ */
+
+static inline void
+of_wire_buffer_match_set(int version, of_wire_buffer_t *wbuf, int offset,
+                      of_match_t *value)
+{
+    ASSERT(0);
+}
+
+/**
+ * Get a port description object from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to the structure to fill out
+ *
+ * NOT IMPLEMENTED.
+ *
+ * @fixme Where should this go?
+ */
+
+static inline void
+of_wire_buffer_of_port_desc_get(int version, of_wire_buffer_t *wbuf, int offset,
+                             void *value)
+{
+    ASSERT(0);
+}
+
+/**
+ * Set a port description object in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to the structure to store
+ *
+ * NOT IMPLEMENTED.
+ *
+ * @fixme Where should this go?
+ */
+
+static inline void
+of_wire_buffer_of_port_desc_set(int version, of_wire_buffer_t *wbuf, int offset,
+                             void *value)
+{
+    ASSERT(0);
+}
+
+/**
+ * Get a port number scalar from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to where to put value
+ *
+ * Port numbers are version specific.
+ */
+
+static inline void
+of_wire_buffer_port_no_get(int version, of_wire_buffer_t *wbuf, int offset,
+                        of_port_no_t *value)
+{
+    uint16_t v16;
+    uint32_t v32;
+
+    switch (version) {
+    case OF_VERSION_1_0:
+        of_wire_buffer_u16_get(wbuf, offset, &v16);
+        *value = v16;
+        break;
+    case OF_VERSION_1_1:
+    case OF_VERSION_1_2:
+    case OF_VERSION_1_3:
+        of_wire_buffer_u32_get(wbuf, offset, &v32);
+        *value = v32;
+        break;
+    default:
+        ASSERT(0);
+    }
+}
+
+/**
+ * Set a port number scalar in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value The value to store in the buffer
+ *
+ * Port numbers are version specific.
+ */
+
+static inline void
+of_wire_buffer_port_no_set(int version, of_wire_buffer_t *wbuf, int offset,
+                      of_port_no_t value)
+{
+
+    switch (version) {
+    case OF_VERSION_1_0:
+        of_wire_buffer_u16_set(wbuf, offset, (uint16_t)value);
+        break;
+    case OF_VERSION_1_1:
+    case OF_VERSION_1_2:
+    case OF_VERSION_1_3:
+        of_wire_buffer_u32_set(wbuf, offset, (uint32_t)value);
+        break;
+    default:
+        ASSERT(0);
+    }
+}
+
+/**
+ * Get a flow mod command value from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to where to put value
+ */
+
+static inline void
+of_wire_buffer_fm_cmd_get(int version, of_wire_buffer_t *wbuf, int offset,
+                        of_fm_cmd_t *value)
+{
+    uint16_t v16;
+    uint8_t v8;
+
+    switch (version) {
+    case OF_VERSION_1_0:
+        of_wire_buffer_u16_get(wbuf, offset, &v16);
+        *value = v16;
+        break;
+    case OF_VERSION_1_1:
+    case OF_VERSION_1_2:
+    case OF_VERSION_1_3:
+        of_wire_buffer_u8_get(wbuf, offset, &v8);
+        *value = v8;
+        break;
+    default:
+        ASSERT(0);
+    }
+}
+
+/**
+ * Set a flow mod command value in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value The value to store
+ */
+
+static inline void
+of_wire_buffer_fm_cmd_set(int version, of_wire_buffer_t *wbuf, int offset,
+                      of_fm_cmd_t value)
+{
+    switch (version) {
+    case OF_VERSION_1_0:
+        of_wire_buffer_u16_set(wbuf, offset, (uint16_t)value);
+        break;
+    case OF_VERSION_1_1:
+    case OF_VERSION_1_2:
+    case OF_VERSION_1_3:
+        of_wire_buffer_u8_set(wbuf, offset, (uint8_t)value);
+        break;
+    default:
+        ASSERT(0);
+    }
+}
+
+/**
+ * Get a wild card bitmap value from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to where to store the value
+ */
+
+static inline void
+of_wire_buffer_wc_bmap_get(int version, of_wire_buffer_t *wbuf, int offset,
+                        of_wc_bmap_t *value)
+{
+    uint32_t v32;
+    uint64_t v64;
+
+    switch (version) {
+    case OF_VERSION_1_0:
+    case OF_VERSION_1_1:
+        of_wire_buffer_u32_get(wbuf, offset, &v32);
+        *value = v32;
+        break;
+    case OF_VERSION_1_2:
+    case OF_VERSION_1_3:
+        of_wire_buffer_u64_get(wbuf, offset, &v64);
+        *value = v64;
+        break;
+    default:
+        ASSERT(0);
+    }
+}
+
+/**
+ * Set a wild card bitmap value in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value The value to store
+ */
+
+static inline void
+of_wire_buffer_wc_bmap_set(int version, of_wire_buffer_t *wbuf, int offset,
+                      of_wc_bmap_t value)
+{
+    switch (version) {
+    case OF_VERSION_1_0:
+    case OF_VERSION_1_1:
+        of_wire_buffer_u32_set(wbuf, offset, (uint32_t)value);
+        break;
+    case OF_VERSION_1_2:
+    case OF_VERSION_1_3:
+        of_wire_buffer_u64_set(wbuf, offset, (uint64_t)value);
+        break;
+    default:
+        ASSERT(0);
+    }
+}
+
+/* Match bitmaps and wildcard bitmaps follow the same pattern */
+#define of_wire_buffer_match_bmap_get of_wire_buffer_wc_bmap_get
+#define of_wire_buffer_match_bmap_set of_wire_buffer_wc_bmap_set
+
+/* Derived functions, mostly for fixed length name strings */
+#define of_wire_buffer_char_get of_wire_buffer_u8_get
+#define of_wire_buffer_char_set of_wire_buffer_u8_set
+
+
+/**
+ * Get an octet object from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to where to put value
+ *
+ * of_octets_t is treated specially as the high level functions pass around
+ * pointers for "get" operators.
+ *
+ * Important: The length of data to copy is stored in the value->bytes
+ * variable.
+ */
+
+static inline void
+of_wire_buffer_octets_data_get(of_wire_buffer_t *wbuf, int offset,
+                               of_octets_t *value)
+{
+    OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, offset + OF_OCTETS_BYTES_GET(value));
+    buf_octets_get(OF_WIRE_BUFFER_INDEX(wbuf, offset),
+                   OF_OCTETS_POINTER_GET(value),
+                   OF_OCTETS_BYTES_GET(value));
+}
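+
+/*
+ * Usage sketch (illustrative only): the caller sets the destination pointer
+ * and the byte count in the octets object before the call, e.g.
+ *
+ *     of_octets_t octets;
+ *     OF_OCTETS_POINTER_SET(&octets, my_buf);
+ *     OF_OCTETS_BYTES_SET(&octets, want_bytes);
+ *     of_wire_buffer_octets_data_get(wbuf, offset, &octets);
+ *
+ * where my_buf and want_bytes are caller-provided; the _wbuf_octets_get
+ * helper below wraps exactly this pattern.
+ */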
+
+/**
+ * Set an octet object in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param value Pointer to the octet stream to store
+ * @param cur_len Current length of data in the buffer
+ *
+ * of_octets_t is treated specially as the high level functions pass around
+ * pointers for "get" operators.
+ *
+ * @fixme Need to take into account cur_len
+ */
+
+static inline void
+of_wire_buffer_octets_data_set(of_wire_buffer_t *wbuf, int offset,
+                               of_octets_t *value, int cur_len)
+{
+    // FIXME need to adjust length of octets member in buffer
+    ASSERT(cur_len == 0 || cur_len == value->bytes);
+
+    OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, offset + OF_OCTETS_BYTES_GET(value));
+    buf_octets_set(OF_WIRE_BUFFER_INDEX(wbuf, offset),
+                   OF_OCTETS_POINTER_GET(value),
+                   OF_OCTETS_BYTES_GET(value));
+}
+
+static inline void
+_wbuf_octets_set(of_wire_buffer_t *wbuf, int offset, uint8_t *src, int bytes) {
+    of_octets_t octets;
+    OF_OCTETS_POINTER_SET(&octets, src);
+    OF_OCTETS_BYTES_SET(&octets, bytes);
+    of_wire_buffer_octets_data_set(wbuf, offset, &octets, bytes);
+}
+
+static inline void
+_wbuf_octets_get(of_wire_buffer_t *wbuf, int offset, uint8_t *dst, int bytes) {
+    of_octets_t octets;
+    OF_OCTETS_POINTER_SET(&octets, dst);
+    OF_OCTETS_BYTES_SET(&octets, bytes);
+    of_wire_buffer_octets_data_get(wbuf, offset, &octets);
+}
+
+
+/**
+ * Get a MAC address from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param mac Pointer to the mac address location
+ *
+ * Uses the octets function.
+ */
+
+#define of_wire_buffer_mac_get(buf, offset, mac) \
+    _wbuf_octets_get(buf, offset, (uint8_t *)mac, 6)
+
+/**
+ * Set a MAC address in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param mac The variable holding the mac address to store
+ *
+ * Uses the octets function.
+ */
+
+#define of_wire_buffer_mac_set(buf, offset, mac) \
+    _wbuf_octets_set(buf, offset, (uint8_t *)&mac, 6)
+
+
+/**
+ * Get a port name string from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param portname Where to store the port name
+ *
+ * Uses the octets function.
+ */
+#define of_wire_buffer_port_name_get(buf, offset, portname) \
+    _wbuf_octets_get(buf, offset, (uint8_t *)portname, \
+                           OF_MAX_PORT_NAME_LEN)
+
+/**
+ * Set a port name in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param portname The port name to store
+ *
+ * Uses the octets function.
+ */
+
+#define of_wire_buffer_port_name_set(buf, offset, portname) \
+    _wbuf_octets_set(buf, offset, (uint8_t *)portname, \
+                           OF_MAX_PORT_NAME_LEN)
+
+
+/**
+ * Get a table name string from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param tabname Where to store the table name
+ *
+ * Uses the octets function.
+ */
+
+#define of_wire_buffer_tab_name_get(buf, offset, tabname) \
+    _wbuf_octets_get(buf, offset, (uint8_t *)tabname, \
+                           OF_MAX_TABLE_NAME_LEN)
+
+/**
+ * Set a table name in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param tabname The table name to store
+ *
+ * Uses the octets function.
+ */
+
+#define of_wire_buffer_tab_name_set(buf, offset, tabname) \
+    _wbuf_octets_set(buf, offset, (uint8_t *)tabname, \
+                           OF_MAX_TABLE_NAME_LEN)
+
+/**
+ * Get a description string from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param desc Where to store the description string
+ *
+ * Uses the octets function.
+ */
+
+#define of_wire_buffer_desc_str_get(buf, offset, desc) \
+    _wbuf_octets_get(buf, offset, (uint8_t *)desc, OF_DESC_STR_LEN)
+
+/**
+ * Set a description string in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param desc The description string
+ *
+ * Uses the octets function.
+ */
+
+#define of_wire_buffer_desc_str_set(buf, offset, desc) \
+    _wbuf_octets_set(buf, offset, (uint8_t *)desc, OF_DESC_STR_LEN)
+
+/**
+ * Get a serial number string from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param sernum Where to store the serial number string
+ *
+ * Uses the octets function.
+ */
+
+#define of_wire_buffer_ser_num_get(buf, offset, sernum) \
+    _wbuf_octets_get(buf, offset, (uint8_t *)sernum, OF_SERIAL_NUM_LEN)
+
+/**
+ * Set a serial number string in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param sernum The serial number string
+ *
+ * Uses the octets function.
+ */
+
+#define of_wire_buffer_ser_num_set(buf, offset, sernum) \
+    _wbuf_octets_set(buf, offset, (uint8_t *)sernum, OF_SERIAL_NUM_LEN)
+
+/**
+ * Get an ipv6 address from a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param addr Pointer to where to store the ipv6 address
+ *
+ * Uses the octets function.
+ */
+
+#define of_wire_buffer_ipv6_get(buf, offset, addr) \
+    _wbuf_octets_get(buf, offset, (uint8_t *)addr, sizeof(of_ipv6_t))
+
+/**
+ * Set an ipv6 address in a wire buffer
+ * @param wbuf The pointer to the wire buffer structure
+ * @param offset Offset in the wire buffer
+ * @param addr The variable holding ipv6 address to store
+ *
+ * Uses the octets function.
+ */
+
+#define of_wire_buffer_ipv6_set(buf, offset, addr) \
+    _wbuf_octets_set(buf, offset, (uint8_t *)&addr, sizeof(of_ipv6_t))
+
+/* Move the data from start_offset through the end of the buffer so that it begins at new_offset */
+static inline void
+of_wire_buffer_move_end(of_wire_buffer_t *wbuf, int start_offset, int new_offset)
+{
+    int bytes;
+    int new_length;
+
+    if (new_offset > start_offset) {
+        bytes =  new_offset - start_offset;
+        new_length = wbuf->alloc_bytes + bytes;
+        OF_WIRE_BUFFER_ACCESS_CHECK(wbuf, new_length);
+    } else {
+        bytes =  start_offset - new_offset;
+        new_length = wbuf->alloc_bytes - bytes;
+    }
+
+    MEMMOVE(&wbuf->buf[new_offset], &wbuf->buf[start_offset], bytes);
+    wbuf->alloc_bytes = new_length;
+}
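+
+/*
+ * Illustrative sketch (the offsets below are hypothetical): to open an
+ * 8-byte gap at offset 24 so new data can be inserted, shift the tail of
+ * the buffer forward; moving it back again closes the gap.
+ *
+ *     of_wire_buffer_move_end(wbuf, 24, 32);  // grow: tail moves right by 8
+ *     of_wire_buffer_move_end(wbuf, 32, 24);  // shrink: tail moves left by 8
+ */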
+
+/* Given a wire buffer object and the offset of the start of an of_match
+ * struct, read the match's 16-bit length field (at offset + 2) and return
+ * its total (padded) length in the buffer, as computed by OF_MATCH_BYTES.
+ */
+static inline int
+of_match_bytes(of_wire_buffer_t *wbuf, int offset)
+{
+    uint16_t len;
+    of_wire_buffer_u16_get(wbuf, offset + 2, &len);
+    return OF_MATCH_BYTES(len);
+}
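+
+/*
+ * Illustrative sketch (wbuf and match_offset are hypothetical): a caller
+ * that has located an of_match at match_offset can step past it like this.
+ *
+ *     int match_len = of_match_bytes(wbuf, match_offset);
+ *     int next_offset = match_offset + match_len;
+ */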
+
+extern void
+of_wire_buffer_replace_data(of_wire_buffer_t *wbuf, 
+                            int offset, 
+                            int old_len,
+                            uint8_t *data,
+                            int new_len);
+
+#endif /* _OF_WIRE_BUF_H_ */
diff --git a/c_gen/util.py b/c_gen/util.py
new file mode 100644
index 0000000..d4b25bf
--- /dev/null
+++ b/c_gen/util.py
@@ -0,0 +1,41 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+Utilities for generating the target C code
+"""
+import os
+import loxi_utils.loxi_utils as utils
+
+templates_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'templates')
+template_path = [templates_dir, templates_dir + '/locitest']
+
+def render_template(out, name, **context):
+    utils.render_template(out, name, template_path, context)
+
+def render_static(out, name):
+    utils.render_static(out, name, template_path)
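+
+# Illustrative usage (a sketch; the file name and keyword argument below are
+# hypothetical and only show the calling convention):
+#
+#   with open('loci_show.c', 'w') as out:
+#       render_template(out, 'loci_show.c', version=1)
+#
+# render_static(out, name) works the same way but takes no template context.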
diff --git a/canonical/openflow.h-1.0 b/canonical/openflow.h-1.0
new file mode 100644
index 0000000..c0b5090
--- /dev/null
+++ b/canonical/openflow.h-1.0
@@ -0,0 +1,970 @@
+/* Copyright (c) 2008 The Board of Trustees of The Leland Stanford
+ * Junior University
+ *
+ * We are making the OpenFlow specification and associated documentation
+ * (Software) available for public use and benefit with the expectation
+ * that others will use, modify and enhance the Software and contribute
+ * those enhancements back to the community. However, since we would
+ * like to make the Software available for broadest use, with as few
+ * restrictions as possible permission is hereby granted, free of
+ * charge, to any person obtaining a copy of this Software to deal in
+ * the Software under the copyrights without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT.  IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * The name and trademarks of copyright holder(s) may NOT be used in
+ * advertising or publicity pertaining to the Software or any
+ * derivatives without specific, written prior permission.
+ */
+
+/* OpenFlow: protocol between controller and datapath. */
+
+#ifndef OPENFLOW_OPENFLOW_H
+#define OPENFLOW_OPENFLOW_H 1
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#else
+#include <stdint.h>
+#endif
+
+#ifdef SWIG
+#define OFP_ASSERT(EXPR)        /* SWIG can't handle OFP_ASSERT. */
+#elif !defined(__cplusplus)
+/* Build-time assertion for use in a declaration context. */
+#define OFP_ASSERT(EXPR)                                                \
+        extern int (*build_assert(void))[ sizeof(struct {               \
+                    unsigned int build_assert_failed : (EXPR) ? 1 : -1; })]
+#else /* __cplusplus */
+#define OFP_ASSERT(_EXPR) typedef int build_assert_failed[(_EXPR) ? 1 : -1]
+#endif /* __cplusplus */
+
+#ifndef SWIG
+#define OFP_PACKED __attribute__((packed))
+#else
+#define OFP_PACKED              /* SWIG doesn't understand __attribute. */
+#endif
+
+/* Version number:
+ * Non-experimental versions released: 0x01
+ * Experimental versions released: 0x81 -- 0x99
+ */
+/* The most significant bit being set in the version field indicates an
+ * experimental OpenFlow version.
+ */
+#define OFP_VERSION   0x01
+
+#define OFP_MAX_TABLE_NAME_LEN 32
+#define OFP_MAX_PORT_NAME_LEN  16
+
+#define OFP_TCP_PORT  6633
+#define OFP_SSL_PORT  6633
+
+#define OFP_ETH_ALEN 6          /* Bytes in an Ethernet address. */
+
+/* Port numbering.  Physical ports are numbered starting from 1. */
+enum ofp_port {
+    /* Maximum number of physical switch ports. */
+    OFPP_MAX = 0xff00,
+
+    /* Fake output "ports". */
+    OFPP_IN_PORT    = 0xfff8,  /* Send the packet out the input port.  This
+                                  virtual port must be explicitly used
+                                  in order to send back out of the input
+                                  port. */
+    OFPP_TABLE      = 0xfff9,  /* Perform actions in flow table.
+                                  NB: This can only be the destination
+                                  port for packet-out messages. */
+    OFPP_NORMAL     = 0xfffa,  /* Process with normal L2/L3 switching. */
+    OFPP_FLOOD      = 0xfffb,  /* All physical ports except input port and
+                                  those disabled by STP. */
+    OFPP_ALL        = 0xfffc,  /* All physical ports except input port. */
+    OFPP_CONTROLLER = 0xfffd,  /* Send to controller. */
+    OFPP_LOCAL      = 0xfffe,  /* Local openflow "port". */
+    OFPP_NONE       = 0xffff   /* Not associated with a physical port. */
+};
+
+enum ofp_type {
+    /* Immutable messages. */
+    OFPT_HELLO,               /* Symmetric message */
+    OFPT_ERROR,               /* Symmetric message */
+    OFPT_ECHO_REQUEST,        /* Symmetric message */
+    OFPT_ECHO_REPLY,          /* Symmetric message */
+    OFPT_VENDOR,              /* Symmetric message */
+
+    /* Switch configuration messages. */
+    OFPT_FEATURES_REQUEST,    /* Controller/switch message */
+    OFPT_FEATURES_REPLY,      /* Controller/switch message */
+    OFPT_GET_CONFIG_REQUEST,  /* Controller/switch message */
+    OFPT_GET_CONFIG_REPLY,    /* Controller/switch message */
+    OFPT_SET_CONFIG,          /* Controller/switch message */
+
+    /* Asynchronous messages. */
+    OFPT_PACKET_IN,           /* Async message */
+    OFPT_FLOW_REMOVED,        /* Async message */
+    OFPT_PORT_STATUS,         /* Async message */
+
+    /* Controller command messages. */
+    OFPT_PACKET_OUT,          /* Controller/switch message */
+    OFPT_FLOW_MOD,            /* Controller/switch message */
+    OFPT_PORT_MOD,            /* Controller/switch message */
+
+    /* Statistics messages. */
+    OFPT_STATS_REQUEST,       /* Controller/switch message */
+    OFPT_STATS_REPLY,         /* Controller/switch message */
+
+    /* Barrier messages. */
+    OFPT_BARRIER_REQUEST,     /* Controller/switch message */
+    OFPT_BARRIER_REPLY,       /* Controller/switch message */
+
+    /* Queue Configuration messages. */
+    OFPT_QUEUE_GET_CONFIG_REQUEST,  /* Controller/switch message */
+    OFPT_QUEUE_GET_CONFIG_REPLY     /* Controller/switch message */
+
+};
+
+/* Header on all OpenFlow packets. */
+struct ofp_header {
+    uint8_t version;    /* OFP_VERSION. */
+    uint8_t type;       /* One of the OFPT_ constants. */
+    uint16_t length;    /* Length including this ofp_header. */
+    uint32_t xid;       /* Transaction id associated with this packet.
+                           Replies use the same id as was in the request
+                           to facilitate pairing. */
+};
+OFP_ASSERT(sizeof(struct ofp_header) == 8);
+
+/* OFPT_HELLO.  This message has an empty body, but implementations must
+ * ignore any data included in the body, to allow for future extensions. */
+struct ofp_hello {
+    struct ofp_header header;
+};
+
+#define OFP_DEFAULT_MISS_SEND_LEN   128
+
+enum ofp_config_flags {
+    /* Handling of IP fragments. */
+    OFPC_FRAG_NORMAL   = 0,  /* No special handling for fragments. */
+    OFPC_FRAG_DROP     = 1,  /* Drop fragments. */
+    OFPC_FRAG_REASM    = 2,  /* Reassemble (only if OFPC_IP_REASM set). */
+    OFPC_FRAG_MASK     = 3
+};
+
+/* Switch configuration. */
+struct ofp_switch_config {
+    struct ofp_header header;
+    uint16_t flags;             /* OFPC_* flags. */
+    uint16_t miss_send_len;     /* Max bytes of new flow that datapath should
+                                   send to the controller. */
+};
+OFP_ASSERT(sizeof(struct ofp_switch_config) == 12);
+
+/* Capabilities supported by the datapath. */
+enum ofp_capabilities {
+    OFPC_FLOW_STATS     = 1 << 0,  /* Flow statistics. */
+    OFPC_TABLE_STATS    = 1 << 1,  /* Table statistics. */
+    OFPC_PORT_STATS     = 1 << 2,  /* Port statistics. */
+    OFPC_STP            = 1 << 3,  /* 802.1d spanning tree. */
+    OFPC_RESERVED       = 1 << 4,  /* Reserved, must be zero. */
+    OFPC_IP_REASM       = 1 << 5,  /* Can reassemble IP fragments. */
+    OFPC_QUEUE_STATS    = 1 << 6,  /* Queue statistics. */
+    OFPC_ARP_MATCH_IP   = 1 << 7   /* Match IP addresses in ARP pkts. */
+};
+
+/* Flags to indicate behavior of the physical port.  These flags are
+ * used in ofp_phy_port to describe the current configuration.  They are
+ * used in the ofp_port_mod message to configure the port's behavior.
+ */
+enum ofp_port_config {
+    OFPPC_PORT_DOWN    = 1 << 0,  /* Port is administratively down. */
+
+    OFPPC_NO_STP       = 1 << 1,  /* Disable 802.1D spanning tree on port. */
+    OFPPC_NO_RECV      = 1 << 2,  /* Drop all packets except 802.1D spanning
+                                     tree packets. */
+    OFPPC_NO_RECV_STP  = 1 << 3,  /* Drop received 802.1D STP packets. */
+    OFPPC_NO_FLOOD     = 1 << 4,  /* Do not include this port when flooding. */
+    OFPPC_NO_FWD       = 1 << 5,  /* Drop packets forwarded to port. */
+    OFPPC_NO_PACKET_IN = 1 << 6   /* Do not send packet-in msgs for port. */
+};
+
+/* Current state of the physical port.  These are not configurable from
+ * the controller.
+ */
+enum ofp_port_state {
+    OFPPS_LINK_DOWN   = 1 << 0, /* No physical link present. */
+
+    /* The OFPPS_STP_* bits have no effect on switch operation.  The
+     * controller must adjust OFPPC_NO_RECV, OFPPC_NO_FWD, and
+     * OFPPC_NO_PACKET_IN appropriately to fully implement an 802.1D spanning
+     * tree. */
+    OFPPS_STP_LISTEN  = 0 << 8, /* Not learning or relaying frames. */
+    OFPPS_STP_LEARN   = 1 << 8, /* Learning but not relaying frames. */
+    OFPPS_STP_FORWARD = 2 << 8, /* Learning and relaying frames. */
+    OFPPS_STP_BLOCK   = 3 << 8, /* Not part of spanning tree. */
+    OFPPS_STP_MASK    = 3 << 8  /* Bit mask for OFPPS_STP_* values. */
+};
+
+/* Features of physical ports available in a datapath. */
+enum ofp_port_features {
+    OFPPF_10MB_HD    = 1 << 0,  /* 10 Mb half-duplex rate support. */
+    OFPPF_10MB_FD    = 1 << 1,  /* 10 Mb full-duplex rate support. */
+    OFPPF_100MB_HD   = 1 << 2,  /* 100 Mb half-duplex rate support. */
+    OFPPF_100MB_FD   = 1 << 3,  /* 100 Mb full-duplex rate support. */
+    OFPPF_1GB_HD     = 1 << 4,  /* 1 Gb half-duplex rate support. */
+    OFPPF_1GB_FD     = 1 << 5,  /* 1 Gb full-duplex rate support. */
+    OFPPF_10GB_FD    = 1 << 6,  /* 10 Gb full-duplex rate support. */
+    OFPPF_COPPER     = 1 << 7,  /* Copper medium. */
+    OFPPF_FIBER      = 1 << 8,  /* Fiber medium. */
+    OFPPF_AUTONEG    = 1 << 9,  /* Auto-negotiation. */
+    OFPPF_PAUSE      = 1 << 10, /* Pause. */
+    OFPPF_PAUSE_ASYM = 1 << 11  /* Asymmetric pause. */
+};
+
+/* Description of a physical port */
+struct ofp_phy_port {
+    uint16_t port_no;
+    uint8_t hw_addr[OFP_ETH_ALEN];
+    char name[OFP_MAX_PORT_NAME_LEN]; /* Null-terminated */
+
+    uint32_t config;        /* Bitmap of OFPPC_* flags. */
+    uint32_t state;         /* Bitmap of OFPPS_* flags. */
+
+    /* Bitmaps of OFPPF_* that describe features.  All bits zeroed if
+     * unsupported or unavailable. */
+    uint32_t curr;          /* Current features. */
+    uint32_t advertised;    /* Features being advertised by the port. */
+    uint32_t supported;     /* Features supported by the port. */
+    uint32_t peer;          /* Features advertised by peer. */
+};
+OFP_ASSERT(sizeof(struct ofp_phy_port) == 48);
+
+/* Switch features. */
+struct ofp_switch_features {
+    struct ofp_header header;
+    uint64_t datapath_id;   /* Datapath unique ID.  The lower 48-bits are for
+                               a MAC address, while the upper 16-bits are
+                               implementer-defined. */
+
+    uint32_t n_buffers;     /* Max packets buffered at once. */
+
+    uint8_t n_tables;       /* Number of tables supported by datapath. */
+    uint8_t pad[3];         /* Align to 64-bits. */
+
+    /* Features. */
+    uint32_t capabilities;  /* Bitmap of supported "ofp_capabilities". */
+    uint32_t actions;       /* Bitmap of supported "ofp_action_type"s. */
+
+    /* Port info.*/
+    struct ofp_phy_port ports[0];  /* Port definitions.  The number of ports
+                                      is inferred from the length field in
+                                      the header. */
+};
+OFP_ASSERT(sizeof(struct ofp_switch_features) == 32);
+
+/* What changed about the physical port */
+enum ofp_port_reason {
+    OFPPR_ADD,              /* The port was added. */
+    OFPPR_DELETE,           /* The port was removed. */
+    OFPPR_MODIFY            /* Some attribute of the port has changed. */
+};
+
+/* A physical port has changed in the datapath */
+struct ofp_port_status {
+    struct ofp_header header;
+    uint8_t reason;          /* One of OFPPR_*. */
+    uint8_t pad[7];          /* Align to 64-bits. */
+    struct ofp_phy_port desc;
+};
+OFP_ASSERT(sizeof(struct ofp_port_status) == 64);
+
+/* Modify behavior of the physical port */
+struct ofp_port_mod {
+    struct ofp_header header;
+    uint16_t port_no;
+    uint8_t hw_addr[OFP_ETH_ALEN]; /* The hardware address is not
+                                      configurable.  This is used to
+                                      sanity-check the request, so it must
+                                      be the same as returned in an
+                                      ofp_phy_port struct. */
+
+    uint32_t config;        /* Bitmap of OFPPC_* flags. */
+    uint32_t mask;          /* Bitmap of OFPPC_* flags to be changed. */
+
+    uint32_t advertise;     /* Bitmap of "ofp_port_features"s.  Zero all
+                               bits to prevent any action taking place. */
+    uint8_t pad[4];         /* Pad to 64-bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_port_mod) == 32);
+
+/* Why is this packet being sent to the controller? */
+enum ofp_packet_in_reason {
+    OFPR_NO_MATCH,          /* No matching flow. */
+    OFPR_ACTION             /* Action explicitly output to controller. */
+};
+
+/* Packet received on port (datapath -> controller). */
+struct ofp_packet_in {
+    struct ofp_header header;
+    uint32_t buffer_id;     /* ID assigned by datapath. */
+    uint16_t total_len;     /* Full length of frame. */
+    uint16_t in_port;       /* Port on which frame was received. */
+    uint8_t reason;         /* Reason packet is being sent (one of OFPR_*) */
+    uint8_t pad;
+    uint8_t data[0];        /* Ethernet frame, halfway through 32-bit word,
+                               so the IP header is 32-bit aligned.  The
+                               amount of data is inferred from the length
+                               field in the header.  Because of padding,
+                               offsetof(struct ofp_packet_in, data) ==
+                               sizeof(struct ofp_packet_in) - 2. */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_in) == 20);
+
+enum ofp_action_type {
+    OFPAT_OUTPUT,           /* Output to switch port. */
+    OFPAT_SET_VLAN_VID,     /* Set the 802.1q VLAN id. */
+    OFPAT_SET_VLAN_PCP,     /* Set the 802.1q priority. */
+    OFPAT_STRIP_VLAN,       /* Strip the 802.1q header. */
+    OFPAT_SET_DL_SRC,       /* Ethernet source address. */
+    OFPAT_SET_DL_DST,       /* Ethernet destination address. */
+    OFPAT_SET_NW_SRC,       /* IP source address. */
+    OFPAT_SET_NW_DST,       /* IP destination address. */
+    OFPAT_SET_NW_TOS,       /* IP ToS (DSCP field, 6 bits). */
+    OFPAT_SET_TP_SRC,       /* TCP/UDP source port. */
+    OFPAT_SET_TP_DST,       /* TCP/UDP destination port. */
+    OFPAT_ENQUEUE,          /* Output to queue.  */
+    OFPAT_VENDOR = 0xffff
+};
+
+/* Action structure for OFPAT_OUTPUT, which sends packets out 'port'.
+ * When the 'port' is the OFPP_CONTROLLER, 'max_len' indicates the max
+ * number of bytes to send.  A 'max_len' of zero means no bytes of the
+ * packet should be sent.*/
+struct ofp_action_output {
+    uint16_t type;                  /* OFPAT_OUTPUT. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t port;                  /* Output port. */
+    uint16_t max_len;               /* Max length to send to controller. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_output) == 8);
+
+/* The VLAN id is 12 bits, so we can use the entire 16 bits to indicate
+ * special conditions.  All ones is used to match that no VLAN id was
+ * set. */
+#define OFP_VLAN_NONE      0xffff
+
+/* Action structure for OFPAT_SET_VLAN_VID. */
+struct ofp_action_vlan_vid {
+    uint16_t type;                  /* OFPAT_SET_VLAN_VID. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t vlan_vid;              /* VLAN id. */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_vlan_vid) == 8);
+
+/* Action structure for OFPAT_SET_VLAN_PCP. */
+struct ofp_action_vlan_pcp {
+    uint16_t type;                  /* OFPAT_SET_VLAN_PCP. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t vlan_pcp;               /* VLAN priority. */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_vlan_pcp) == 8);
+
+/* Action structure for OFPAT_SET_DL_SRC/DST. */
+struct ofp_action_dl_addr {
+    uint16_t type;                  /* OFPAT_SET_DL_SRC/DST. */
+    uint16_t len;                   /* Length is 16. */
+    uint8_t dl_addr[OFP_ETH_ALEN];  /* Ethernet address. */
+    uint8_t pad[6];
+};
+OFP_ASSERT(sizeof(struct ofp_action_dl_addr) == 16);
+
+/* Action structure for OFPAT_SET_NW_SRC/DST. */
+struct ofp_action_nw_addr {
+    uint16_t type;                  /* OFPAT_SET_NW_SRC/DST. */
+    uint16_t len;                   /* Length is 8. */
+    uint32_t nw_addr;               /* IP address. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_nw_addr) == 8);
+
+/* Action structure for OFPAT_SET_TP_SRC/DST. */
+struct ofp_action_tp_port {
+    uint16_t type;                  /* OFPAT_SET_TP_SRC/DST. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t tp_port;               /* TCP/UDP port. */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_tp_port) == 8);
+
+/* Action structure for OFPAT_SET_NW_TOS. */
+struct ofp_action_nw_tos {
+    uint16_t type;                  /* OFPAT_SET_NW_TOS. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t nw_tos;                 /* IP ToS (DSCP field, 6 bits). */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_nw_tos) == 8);
+
+/* Action header for OFPAT_VENDOR. The rest of the body is vendor-defined. */
+struct ofp_action_vendor_header {
+    uint16_t type;                  /* OFPAT_VENDOR. */
+    uint16_t len;                   /* Length is a multiple of 8. */
+    uint32_t vendor;                /* Vendor ID, which takes the same form
+                                       as in "struct ofp_vendor_header". */
+};
+OFP_ASSERT(sizeof(struct ofp_action_vendor_header) == 8);
+
+/* Action header that is common to all actions.  The length includes the
+ * header and any padding used to make the action 64-bit aligned.
+ * NB: The length of an action *must* always be a multiple of eight. */
+struct ofp_action_header {
+    uint16_t type;                  /* One of OFPAT_*. */
+    uint16_t len;                   /* Length of action, including this
+                                       header.  This is the length of action,
+                                       including any padding to make it
+                                       64-bit aligned. */
+    uint8_t pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_action_header) == 8);
+
+/* Send packet (controller -> datapath). */
+struct ofp_packet_out {
+    struct ofp_header header;
+    uint32_t buffer_id;           /* ID assigned by datapath (-1 if none). */
+    uint16_t in_port;             /* Packet's input port (OFPP_NONE if none). */
+    uint16_t actions_len;         /* Size of action array in bytes. */
+    struct ofp_action_header actions[0]; /* Actions. */
+    /* uint8_t data[0]; */        /* Packet data.  The length is inferred
+                                     from the length field in the header.
+                                     (Only meaningful if buffer_id == -1.) */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_out) == 16);
+
+enum ofp_flow_mod_command {
+    OFPFC_ADD,              /* New flow. */
+    OFPFC_MODIFY,           /* Modify all matching flows. */
+    OFPFC_MODIFY_STRICT,    /* Modify entry strictly matching wildcards */
+    OFPFC_DELETE,           /* Delete all matching flows. */
+    OFPFC_DELETE_STRICT    /* Strictly match wildcards and priority. */
+};
+
+/* Flow wildcards. */
+enum ofp_flow_wildcards {
+    OFPFW_IN_PORT  = 1 << 0,  /* Switch input port. */
+    OFPFW_DL_VLAN  = 1 << 1,  /* VLAN id. */
+    OFPFW_DL_SRC   = 1 << 2,  /* Ethernet source address. */
+    OFPFW_DL_DST   = 1 << 3,  /* Ethernet destination address. */
+    OFPFW_DL_TYPE  = 1 << 4,  /* Ethernet frame type. */
+    OFPFW_NW_PROTO = 1 << 5,  /* IP protocol. */
+    OFPFW_TP_SRC   = 1 << 6,  /* TCP/UDP source port. */
+    OFPFW_TP_DST   = 1 << 7,  /* TCP/UDP destination port. */
+
+    /* IP source address wildcard bit count.  0 is exact match, 1 ignores the
+     * LSB, 2 ignores the 2 least-significant bits, ..., 32 and higher wildcard
+     * the entire field.  This is the *opposite* of the usual convention where
+     * e.g. /24 indicates that 8 bits (not 24 bits) are wildcarded. */
+    OFPFW_NW_SRC_SHIFT = 8,
+    OFPFW_NW_SRC_BITS = 6,
+    OFPFW_NW_SRC_MASK = ((1 << OFPFW_NW_SRC_BITS) - 1) << OFPFW_NW_SRC_SHIFT,
+    OFPFW_NW_SRC_ALL = 32 << OFPFW_NW_SRC_SHIFT,
+
+    /* IP destination address wildcard bit count.  Same format as source. */
+    OFPFW_NW_DST_SHIFT = 14,
+    OFPFW_NW_DST_BITS = 6,
+    OFPFW_NW_DST_MASK = ((1 << OFPFW_NW_DST_BITS) - 1) << OFPFW_NW_DST_SHIFT,
+    OFPFW_NW_DST_ALL = 32 << OFPFW_NW_DST_SHIFT,
+
+    OFPFW_DL_VLAN_PCP = 1 << 20,  /* VLAN priority. */
+    OFPFW_NW_TOS = 1 << 21,  /* IP ToS (DSCP field, 6 bits). */
+
+    /* Wildcard all fields. */
+    OFPFW_ALL = ((1 << 22) - 1)
+};
+
+/* The wildcards for ICMP type and code fields use the transport source
+ * and destination port fields, respectively. */
+#define OFPFW_ICMP_TYPE OFPFW_TP_SRC
+#define OFPFW_ICMP_CODE OFPFW_TP_DST
+
+/* Values below this cutoff are 802.3 packets and the two bytes
+ * following MAC addresses are used as a frame length.  Otherwise, the
+ * two bytes are used as the Ethernet type.
+ */
+#define OFP_DL_TYPE_ETH2_CUTOFF   0x0600
+
+/* Value of dl_type to indicate that the frame does not include an
+ * Ethernet type.
+ */
+#define OFP_DL_TYPE_NOT_ETH_TYPE  0x05ff
+
+/* The VLAN id is 12-bits, so we can use the entire 16 bits to indicate
+ * special conditions.  All ones indicates that no VLAN id was set.
+ */
+#define OFP_VLAN_NONE      0xffff
+
+/* Fields to match against flows */
+struct ofp_match {
+    uint32_t wildcards;        /* Wildcard fields. */
+    uint16_t in_port;          /* Input switch port. */
+    uint8_t dl_src[OFP_ETH_ALEN]; /* Ethernet source address. */
+    uint8_t dl_dst[OFP_ETH_ALEN]; /* Ethernet destination address. */
+    uint16_t dl_vlan;          /* Input VLAN id. */
+    uint8_t dl_vlan_pcp;       /* Input VLAN priority. */
+    uint8_t pad1[1];           /* Align to 64-bits */
+    uint16_t dl_type;          /* Ethernet frame type. */
+    uint8_t nw_tos;            /* IP ToS (actually DSCP field, 6 bits). */
+    uint8_t nw_proto;          /* IP protocol or lower 8 bits of
+                                * ARP opcode. */
+    uint8_t pad2[2];           /* Align to 64-bits */
+    uint32_t nw_src;           /* IP source address. */
+    uint32_t nw_dst;           /* IP destination address. */
+    uint16_t tp_src;           /* TCP/UDP source port. */
+    uint16_t tp_dst;           /* TCP/UDP destination port. */
+};
+OFP_ASSERT(sizeof(struct ofp_match) == 40);
+
+/* The match fields for ICMP type and code use the transport source and
+ * destination port fields, respectively. */
+#define icmp_type tp_src
+#define icmp_code tp_dst
+
+/* Value used in "idle_timeout" and "hard_timeout" to indicate that the entry
+ * is permanent. */
+#define OFP_FLOW_PERMANENT 0
+
+/* By default, choose a priority in the middle. */
+#define OFP_DEFAULT_PRIORITY 0x8000
+
+enum ofp_flow_mod_flags {
+    OFPFF_SEND_FLOW_REM = 1 << 0,  /* Send flow removed message when flow
+                                    * expires or is deleted. */
+    OFPFF_CHECK_OVERLAP = 1 << 1,  /* Check for overlapping entries first. */
+    OFPFF_EMERG         = 1 << 2   /* Remark this is for emergency. */
+};
+
+/* Flow setup and teardown (controller -> datapath). */
+struct ofp_flow_mod {
+    struct ofp_header header;
+    struct ofp_match match;      /* Fields to match */
+    uint64_t cookie;             /* Opaque controller-issued identifier. */
+
+    /* Flow actions. */
+    uint16_t command;             /* One of OFPFC_*. */
+    uint16_t idle_timeout;        /* Idle time before discarding (seconds). */
+    uint16_t hard_timeout;        /* Max time before discarding (seconds). */
+    uint16_t priority;            /* Priority level of flow entry. */
+    uint32_t buffer_id;           /* Buffered packet to apply to (or -1).
+                                     Not meaningful for OFPFC_DELETE*. */
+    uint16_t out_port;            /* For OFPFC_DELETE* commands, require
+                                     matching entries to include this as an
+                                     output port.  A value of OFPP_NONE
+                                     indicates no restriction. */
+    uint16_t flags;               /* One of OFPFF_*. */
+    struct ofp_action_header actions[0]; /* The action length is inferred
+                                            from the length field in the
+                                            header. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_mod) == 72);
+
+/* Why was this flow removed? */
+enum ofp_flow_removed_reason {
+    OFPRR_IDLE_TIMEOUT,         /* Flow idle time exceeded idle_timeout. */
+    OFPRR_HARD_TIMEOUT,         /* Time exceeded hard_timeout. */
+    OFPRR_DELETE                /* Evicted by a DELETE flow mod. */
+};
+
+/* Flow removed (datapath -> controller). */
+struct ofp_flow_removed {
+    struct ofp_header header;
+    struct ofp_match match;   /* Description of fields. */
+    uint64_t cookie;          /* Opaque controller-issued identifier. */
+
+    uint16_t priority;        /* Priority level of flow entry. */
+    uint8_t reason;           /* One of OFPRR_*. */
+    uint8_t pad[1];           /* Align to 32-bits. */
+
+    uint32_t duration_sec;    /* Time flow was alive in seconds. */
+    uint32_t duration_nsec;   /* Time flow was alive in nanoseconds beyond
+                                 duration_sec. */
+    uint16_t idle_timeout;    /* Idle timeout from original flow mod. */
+    uint8_t pad2[2];          /* Align to 64-bits. */
+    uint64_t packet_count;
+    uint64_t byte_count;
+};
+OFP_ASSERT(sizeof(struct ofp_flow_removed) == 88);
+
+/* Values for 'type' in ofp_error_message.  These values are immutable: they
+ * will not change in future versions of the protocol (although new values may
+ * be added). */
+enum ofp_error_type {
+    OFPET_HELLO_FAILED,         /* Hello protocol failed. */
+    OFPET_BAD_REQUEST,          /* Request was not understood. */
+    OFPET_BAD_ACTION,           /* Error in action description. */
+    OFPET_FLOW_MOD_FAILED,      /* Problem modifying flow entry. */
+    OFPET_PORT_MOD_FAILED,      /* Port mod request failed. */
+    OFPET_QUEUE_OP_FAILED       /* Queue operation failed. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_HELLO_FAILED.  'data' contains an
+ * ASCII text string that may give failure details. */
+enum ofp_hello_failed_code {
+    OFPHFC_INCOMPATIBLE,        /* No compatible version. */
+    OFPHFC_EPERM                /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_REQUEST.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_request_code {
+    OFPBRC_BAD_VERSION,         /* ofp_header.version not supported. */
+    OFPBRC_BAD_TYPE,            /* ofp_header.type not supported. */
+    OFPBRC_BAD_STAT,            /* ofp_stats_request.type not supported. */
+    OFPBRC_BAD_VENDOR,          /* Vendor not supported (in ofp_vendor_header
+                                 * or ofp_stats_request or ofp_stats_reply). */
+    OFPBRC_BAD_SUBTYPE,         /* Vendor subtype not supported. */
+    OFPBRC_EPERM,               /* Permissions error. */
+    OFPBRC_BAD_LEN,             /* Wrong request length for type. */
+    OFPBRC_BUFFER_EMPTY,        /* Specified buffer has already been used. */
+    OFPBRC_BUFFER_UNKNOWN       /* Specified buffer does not exist. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_ACTION.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_action_code {
+    OFPBAC_BAD_TYPE,           /* Unknown action type. */
+    OFPBAC_BAD_LEN,            /* Length problem in actions. */
+    OFPBAC_BAD_VENDOR,         /* Unknown vendor id specified. */
+    OFPBAC_BAD_VENDOR_TYPE,    /* Unknown action type for vendor id. */
+    OFPBAC_BAD_OUT_PORT,       /* Problem validating output action. */
+    OFPBAC_BAD_ARGUMENT,       /* Bad action argument. */
+    OFPBAC_EPERM,              /* Permissions error. */
+    OFPBAC_TOO_MANY,           /* Can't handle this many actions. */
+    OFPBAC_BAD_QUEUE           /* Problem validating output queue. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_FLOW_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_flow_mod_failed_code {
+    OFPFMFC_ALL_TABLES_FULL,    /* Flow not added because of full tables. */
+    OFPFMFC_OVERLAP,            /* Attempted to add overlapping flow with
+                                 * CHECK_OVERLAP flag set. */
+    OFPFMFC_EPERM,              /* Permissions error. */
+    OFPFMFC_BAD_EMERG_TIMEOUT,  /* Flow not added because of non-zero idle/hard
+                                 * timeout. */
+    OFPFMFC_BAD_COMMAND,        /* Unknown command. */
+    OFPFMFC_UNSUPPORTED         /* Unsupported action list - cannot process in
+                                 * the order specified. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_PORT_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_port_mod_failed_code {
+    OFPPMFC_BAD_PORT,            /* Specified port does not exist. */
+    OFPPMFC_BAD_HW_ADDR,         /* Specified hardware address is wrong. */
+};
+
+/* ofp_error msg 'code' values for OFPET_QUEUE_OP_FAILED. 'data' contains
+ * at least the first 64 bytes of the failed request */
+enum ofp_queue_op_failed_code {
+    OFPQOFC_BAD_PORT,           /* Invalid port (or port does not exist). */
+    OFPQOFC_BAD_QUEUE,          /* Queue does not exist. */
+    OFPQOFC_EPERM               /* Permissions error. */
+};
+
+/* OFPT_ERROR: Error message (datapath -> controller). */
+struct ofp_error_msg {
+    struct ofp_header header;
+
+    uint16_t type;
+    uint16_t code;
+    uint8_t data[0];          /* Variable-length data.  Interpreted based
+                                 on the type and code. */
+};
+OFP_ASSERT(sizeof(struct ofp_error_msg) == 12);
+
+enum ofp_stats_types {
+    /* Description of this OpenFlow switch.
+     * The request body is empty.
+     * The reply body is struct ofp_desc_stats. */
+    OFPST_DESC,
+
+    /* Individual flow statistics.
+     * The request body is struct ofp_flow_stats_request.
+     * The reply body is an array of struct ofp_flow_stats. */
+    OFPST_FLOW,
+
+    /* Aggregate flow statistics.
+     * The request body is struct ofp_aggregate_stats_request.
+     * The reply body is struct ofp_aggregate_stats_reply. */
+    OFPST_AGGREGATE,
+
+    /* Flow table statistics.
+     * The request body is empty.
+     * The reply body is an array of struct ofp_table_stats. */
+    OFPST_TABLE,
+
+    /* Physical port statistics.
+     * The request body is struct ofp_port_stats_request.
+     * The reply body is an array of struct ofp_port_stats. */
+    OFPST_PORT,
+
+    /* Queue statistics for a port
+     * The request body defines the port
+     * The reply body is an array of struct ofp_queue_stats */
+    OFPST_QUEUE,
+
+    /* Vendor extension.
+     * The request and reply bodies begin with a 32-bit vendor ID, which takes
+     * the same form as in "struct ofp_vendor_header".  The request and reply
+     * bodies are otherwise vendor-defined. */
+    OFPST_VENDOR = 0xffff
+};
+
+struct ofp_stats_request {
+    struct ofp_header header;
+    uint16_t type;              /* One of the OFPST_* constants. */
+    uint16_t flags;             /* OFPSF_REQ_* flags (none yet defined). */
+    uint8_t body[0];            /* Body of the request. */
+};
+OFP_ASSERT(sizeof(struct ofp_stats_request) == 12);
+
+enum ofp_stats_reply_flags {
+    OFPSF_REPLY_MORE  = 1 << 0  /* More replies to follow. */
+};
+
+struct ofp_stats_reply {
+    struct ofp_header header;
+    uint16_t type;              /* One of the OFPST_* constants. */
+    uint16_t flags;             /* OFPSF_REPLY_* flags. */
+    uint8_t body[0];            /* Body of the reply. */
+};
+OFP_ASSERT(sizeof(struct ofp_stats_reply) == 12);
+
+#define DESC_STR_LEN   256
+#define SERIAL_NUM_LEN 32
+/* Body of reply to OFPST_DESC request.  Each entry is a NULL-terminated
+ * ASCII string. */
+struct ofp_desc_stats {
+    char mfr_desc[DESC_STR_LEN];       /* Manufacturer description. */
+    char hw_desc[DESC_STR_LEN];        /* Hardware description. */
+    char sw_desc[DESC_STR_LEN];        /* Software description. */
+    char serial_num[SERIAL_NUM_LEN];   /* Serial number. */
+    char dp_desc[DESC_STR_LEN];        /* Human readable description of datapath. */
+};
+OFP_ASSERT(sizeof(struct ofp_desc_stats) == 1056);
+
+/* Body for ofp_stats_request of type OFPST_FLOW. */
+struct ofp_flow_stats_request {
+    struct ofp_match match;   /* Fields to match. */
+    uint8_t table_id;         /* ID of table to read (from ofp_table_stats),
+                                 0xff for all tables or 0xfe for emergency. */
+    uint8_t pad;              /* Align to 32 bits. */
+    uint16_t out_port;        /* Require matching entries to include this
+                                 as an output port.  A value of OFPP_NONE
+                                 indicates no restriction. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_stats_request) == 44);
+
+/* Body of reply to OFPST_FLOW request. */
+struct ofp_flow_stats {
+    uint16_t length;          /* Length of this entry. */
+    uint8_t table_id;         /* ID of table flow came from. */
+    uint8_t pad;
+    struct ofp_match match;   /* Description of fields. */
+    uint32_t duration_sec;    /* Time flow has been alive in seconds. */
+    uint32_t duration_nsec;   /* Time flow has been alive in nanoseconds beyond
+                                 duration_sec. */
+    uint16_t priority;        /* Priority of the entry. Only meaningful
+                                 when this is not an exact-match entry. */
+    uint16_t idle_timeout;    /* Number of seconds idle before expiration. */
+    uint16_t hard_timeout;    /* Number of seconds before expiration. */
+    uint8_t pad2[6];          /* Align to 64-bits. */
+    uint64_t cookie;          /* Opaque controller-issued identifier. */
+    uint64_t packet_count;    /* Number of packets in flow. */
+    uint64_t byte_count;      /* Number of bytes in flow. */
+    struct ofp_action_header actions[0]; /* Actions. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_stats) == 88);
+
+/* Body for ofp_stats_request of type OFPST_AGGREGATE. */
+struct ofp_aggregate_stats_request {
+    struct ofp_match match;   /* Fields to match. */
+    uint8_t table_id;         /* ID of table to read (from ofp_table_stats)
+                                 0xff for all tables or 0xfe for emergency. */
+    uint8_t pad;              /* Align to 32 bits. */
+    uint16_t out_port;        /* Require matching entries to include this
+                                 as an output port.  A value of OFPP_NONE
+                                 indicates no restriction. */
+};
+OFP_ASSERT(sizeof(struct ofp_aggregate_stats_request) == 44);
+
+/* Body of reply to OFPST_AGGREGATE request. */
+struct ofp_aggregate_stats_reply {
+    uint64_t packet_count;    /* Number of packets in flows. */
+    uint64_t byte_count;      /* Number of bytes in flows. */
+    uint32_t flow_count;      /* Number of flows. */
+    uint8_t pad[4];           /* Align to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_aggregate_stats_reply) == 24);
+
+/* Body of reply to OFPST_TABLE request. */
+struct ofp_table_stats {
+    uint8_t table_id;        /* Identifier of table.  Lower numbered tables
+                                are consulted first. */
+    uint8_t pad[3];          /* Align to 32-bits. */
+    char name[OFP_MAX_TABLE_NAME_LEN];
+    uint32_t wildcards;      /* Bitmap of OFPFW_* wildcards that are
+                                supported by the table. */
+    uint32_t max_entries;    /* Max number of entries supported. */
+    uint32_t active_count;   /* Number of active entries. */
+    uint64_t lookup_count;   /* Number of packets looked up in table. */
+    uint64_t matched_count;  /* Number of packets that hit table. */
+};
+OFP_ASSERT(sizeof(struct ofp_table_stats) == 64);
+
+/* Body for ofp_stats_request of type OFPST_PORT. */
+struct ofp_port_stats_request {
+    uint16_t port_no;        /* OFPST_PORT message must request statistics
+                              * either for a single port (specified in
+                              * port_no) or for all ports (if port_no ==
+                              * OFPP_NONE). */
+    uint8_t pad[6];
+};
+OFP_ASSERT(sizeof(struct ofp_port_stats_request) == 8);
+
+/* Body of reply to OFPST_PORT request. If a counter is unsupported, set
+ * the field to all ones. */
+struct ofp_port_stats {
+    uint16_t port_no;
+    uint8_t pad[6];          /* Align to 64-bits. */
+    uint64_t rx_packets;     /* Number of received packets. */
+    uint64_t tx_packets;     /* Number of transmitted packets. */
+    uint64_t rx_bytes;       /* Number of received bytes. */
+    uint64_t tx_bytes;       /* Number of transmitted bytes. */
+    uint64_t rx_dropped;     /* Number of packets dropped by RX. */
+    uint64_t tx_dropped;     /* Number of packets dropped by TX. */
+    uint64_t rx_errors;      /* Number of receive errors.  This is a super-set
+                                of more specific receive errors and should be
+                                greater than or equal to the sum of all
+                                rx_*_err values. */
+    uint64_t tx_errors;      /* Number of transmit errors.  This is a super-set
+                                of more specific transmit errors and should be
+                                greater than or equal to the sum of all
+                                tx_*_err values (none currently defined.) */
+    uint64_t rx_frame_err;   /* Number of frame alignment errors. */
+    uint64_t rx_over_err;    /* Number of packets with RX overrun. */
+    uint64_t rx_crc_err;     /* Number of CRC errors. */
+    uint64_t collisions;     /* Number of collisions. */
+};
+OFP_ASSERT(sizeof(struct ofp_port_stats) == 104);
+
+/* Vendor extension. */
+struct ofp_vendor_header {
+    struct ofp_header header;   /* Type OFPT_VENDOR. */
+    uint32_t vendor;            /* Vendor ID:
+                                 * - MSB 0: low-order bytes are IEEE OUI.
+                                 * - MSB != 0: defined by OpenFlow
+                                 *   consortium. */
+    /* Vendor-defined arbitrary additional data. */
+};
+OFP_ASSERT(sizeof(struct ofp_vendor_header) == 12);
+
+/* All ones is used to indicate all queues in a port (for stats retrieval). */
+#define OFPQ_ALL      0xffffffff
+
+/* Min rate > 1000 means not configured. */
+#define OFPQ_MIN_RATE_UNCFG      0xffff
+
+enum ofp_queue_properties {
+    OFPQT_NONE = 0,       /* No property defined for queue (default). */
+    OFPQT_MIN_RATE,       /* Minimum datarate guaranteed. */
+                          /* Other types should be added here
+                           * (i.e. max rate, precedence, etc). */
+};
+
+/* Common description for a queue. */
+struct ofp_queue_prop_header {
+    uint16_t property;    /* One of OFPQT_. */
+    uint16_t len;         /* Length of property, including this header. */
+    uint8_t pad[4];       /* 64-bit alignment. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_header) == 8);
+
+/* Min-Rate queue property description. */
+struct ofp_queue_prop_min_rate {
+    struct ofp_queue_prop_header prop_header; /* prop: OFPQT_MIN, len: 16. */
+    uint16_t rate;        /* In 1/10 of a percent; >1000 -> disabled. */
+    uint8_t pad[6];       /* 64-bit alignment */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_min_rate) == 16);
+
+/* Full description for a queue. */
+struct ofp_packet_queue {
+    uint32_t queue_id;     /* id for the specific queue. */
+    uint16_t len;          /* Length in bytes of this queue desc. */
+    uint8_t pad[2];        /* 64-bit alignment. */
+    struct ofp_queue_prop_header properties[0]; /* List of properties. */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_queue) == 8);
+
+/* Query for port queue configuration. */
+struct ofp_queue_get_config_request {
+    struct ofp_header header;
+    uint16_t port;         /* Port to be queried. Should refer
+                              to a valid physical port (i.e. < OFPP_MAX) */
+    uint8_t pad[2];        /* 32-bit alignment. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_get_config_request) == 12);
+
+/* Queue configuration for a given port. */
+struct ofp_queue_get_config_reply {
+    struct ofp_header header;
+    uint16_t port;
+    uint8_t pad[6];
+    struct ofp_packet_queue queues[0]; /* List of configured queues. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_get_config_reply) == 16);
+
+/* OFPAT_ENQUEUE action struct: send packets to given queue on port. */
+struct ofp_action_enqueue {
+    uint16_t type;            /* OFPAT_ENQUEUE. */
+    uint16_t len;             /* Len is 16. */
+    uint16_t port;            /* Port that the queue belongs to.  Should
+                                 refer to a valid physical port
+                                 (i.e. < OFPP_MAX) or OFPP_IN_PORT. */
+    uint8_t pad[6];           /* Pad for 64-bit alignment. */
+    uint32_t queue_id;        /* Where to enqueue the packets. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_enqueue) == 16);
+
+struct ofp_queue_stats_request {
+    uint16_t port_no;        /* All ports if OFPP_ALL. */
+    uint8_t pad[2];          /* Align to 32-bits. */
+    uint32_t queue_id;       /* All queues if OFPQ_ALL. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_stats_request) == 8);
+
+struct ofp_queue_stats {
+    uint16_t port_no;
+    uint8_t pad[2];          /* Align to 32-bits. */
+    uint32_t queue_id;       /* Queue id. */
+    uint64_t tx_bytes;       /* Number of transmitted bytes. */
+    uint64_t tx_packets;     /* Number of transmitted packets. */
+    uint64_t tx_errors;      /* Number of packets dropped due to overrun. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_stats) == 32);
+
+#endif /* openflow/openflow.h */
diff --git a/canonical/openflow.h-1.1 b/canonical/openflow.h-1.1
new file mode 100644
index 0000000..022c320
--- /dev/null
+++ b/canonical/openflow.h-1.1
@@ -0,0 +1,1413 @@
+/* Copyright (c) 2008 The Board of Trustees of The Leland Stanford
+ * Junior University
+ *
+ * We are making the OpenFlow specification and associated documentation
+ * (Software) available for public use and benefit with the expectation
+ * that others will use, modify and enhance the Software and contribute
+ * those enhancements back to the community. However, since we would
+ * like to make the Software available for broadest use, with as few
+ * restrictions as possible permission is hereby granted, free of
+ * charge, to any person obtaining a copy of this Software to deal in
+ * the Software under the copyrights without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT.  IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * The name and trademarks of copyright holder(s) may NOT be used in
+ * advertising or publicity pertaining to the Software or any
+ * derivatives without specific, written prior permission.
+ */
+
+/* OpenFlow: protocol between controller and datapath. */
+
+#ifndef OPENFLOW_OPENFLOW_H
+#define OPENFLOW_OPENFLOW_H 1
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#else
+#include <stdint.h>
+#endif
+
+#ifdef SWIG
+#define OFP_ASSERT(EXPR)        /* SWIG can't handle OFP_ASSERT. */
+#elif !defined(__cplusplus)
+/* Build-time assertion for use in a declaration context. */
+#define OFP_ASSERT(EXPR)                                                \
+        extern int (*build_assert(void))[ sizeof(struct {               \
+                    unsigned int build_assert_failed : (EXPR) ? 1 : -1; })]
+#else /* __cplusplus */
+#define OFP_ASSERT(_EXPR) typedef int build_assert_failed[(_EXPR) ? 1 : -1]
+#endif /* __cplusplus */
+
+#ifndef SWIG
+#define OFP_PACKED __attribute__((packed))
+#else
+#define OFP_PACKED              /* SWIG doesn't understand __attribute. */
+#endif
+
+/* Version number:
+ * Non-experimental versions released: 0x01
+ * Experimental versions released: 0x81 -- 0x99
+ */
+/* The most significant bit being set in the version field indicates an
+ * experimental OpenFlow version.
+ */
+#define OFP_VERSION   0x02
+
+#define OFP_MAX_TABLE_NAME_LEN 32
+#define OFP_MAX_PORT_NAME_LEN  16
+
+#define OFP_TCP_PORT  6633
+#define OFP_SSL_PORT  6633
+
+#define OFP_ETH_ALEN 6          /* Bytes in an Ethernet address. */
+
+/* Port numbering. Ports are numbered starting from 1. */
+enum ofp_port_no {
+    /* Maximum number of physical switch ports. */
+    OFPP_MAX        = 0xffffff00,
+
+    /* Fake output "ports". */
+    OFPP_IN_PORT    = 0xfffffff8,  /* Send the packet out the input port.  This
+                                      virtual port must be explicitly used
+                                      in order to send back out of the input
+                                      port. */
+    OFPP_TABLE      = 0xfffffff9,  /* Submit the packet to the first flow table
+                                      NB: This destination port can only be
+                                      used in packet-out messages. */
+    OFPP_NORMAL     = 0xfffffffa,  /* Process with normal L2/L3 switching. */
+    OFPP_FLOOD      = 0xfffffffb,  /* All physical ports in VLAN, except input
+                                      port and those blocked or link down. */
+    OFPP_ALL        = 0xfffffffc,  /* All physical ports except input port. */
+    OFPP_CONTROLLER = 0xfffffffd,  /* Send to controller. */
+    OFPP_LOCAL      = 0xfffffffe,  /* Local openflow "port". */
+    OFPP_ANY        = 0xffffffff   /* Wildcard port used only for flow mod
+                                      (delete) and flow stats requests. Selects
+                                      all flows regardless of output port
+                                      (including flows with no output port). */
+};
+
+enum ofp_type {
+    /* Immutable messages. */
+    OFPT_HELLO,               /* Symmetric message */
+    OFPT_ERROR,               /* Symmetric message */
+    OFPT_ECHO_REQUEST,        /* Symmetric message */
+    OFPT_ECHO_REPLY,          /* Symmetric message */
+    OFPT_EXPERIMENTER,        /* Symmetric message */
+
+    /* Switch configuration messages. */
+    OFPT_FEATURES_REQUEST,    /* Controller/switch message */
+    OFPT_FEATURES_REPLY,      /* Controller/switch message */
+    OFPT_GET_CONFIG_REQUEST,  /* Controller/switch message */
+    OFPT_GET_CONFIG_REPLY,    /* Controller/switch message */
+    OFPT_SET_CONFIG,          /* Controller/switch message */
+
+    /* Asynchronous messages. */
+    OFPT_PACKET_IN,           /* Async message */
+    OFPT_FLOW_REMOVED,        /* Async message */
+    OFPT_PORT_STATUS,         /* Async message */
+
+    /* Controller command messages. */
+    OFPT_PACKET_OUT,          /* Controller/switch message */
+    OFPT_FLOW_MOD,            /* Controller/switch message */
+    OFPT_GROUP_MOD,           /* Controller/switch message */
+    OFPT_PORT_MOD,            /* Controller/switch message */
+    OFPT_TABLE_MOD,           /* Controller/switch message */
+
+    /* Statistics messages. */
+    OFPT_STATS_REQUEST,       /* Controller/switch message */
+    OFPT_STATS_REPLY,         /* Controller/switch message */
+
+    /* Barrier messages. */
+    OFPT_BARRIER_REQUEST,     /* Controller/switch message */
+    OFPT_BARRIER_REPLY,       /* Controller/switch message */
+
+    /* Queue Configuration messages. */
+    OFPT_QUEUE_GET_CONFIG_REQUEST,  /* Controller/switch message */
+    OFPT_QUEUE_GET_CONFIG_REPLY,     /* Controller/switch message */
+};
+
+/* Header on all OpenFlow packets. */
+struct ofp_header {
+    uint8_t version;    /* OFP_VERSION. */
+    uint8_t type;       /* One of the OFPT_ constants. */
+    uint16_t length;    /* Length including this ofp_header. */
+    uint32_t xid;       /* Transaction id associated with this packet.
+                           Replies use the same id as was in the request
+                           to facilitate pairing. */
+};
+OFP_ASSERT(sizeof(struct ofp_header) == 8);
+
+/* OFPT_HELLO.  This message has an empty body, but implementations must
+ * ignore any data included in the body, to allow for future extensions. */
+struct ofp_hello {
+    struct ofp_header header;
+};
+
+#define OFP_DEFAULT_MISS_SEND_LEN   128
+
+enum ofp_config_flags {
+    /* Handling of IP fragments. */
+    OFPC_FRAG_NORMAL   = 0,       /* No special handling for fragments. */
+    OFPC_FRAG_DROP     = 1 << 0,  /* Drop fragments. */
+    OFPC_FRAG_REASM    = 1 << 1,  /* Reassemble (only if OFPC_IP_REASM set). */
+    OFPC_FRAG_MASK     = 3,
+
+    /* TTL processing - applicable for IP and MPLS packets */
+    OFPC_INVALID_TTL_TO_CONTROLLER = 1 << 2, /* Send packets with invalid TTL
+                                                ie. 0 or 1 to controller */
+};
+
+/* Switch configuration. */
+struct ofp_switch_config {
+    struct ofp_header header;
+    uint16_t flags;             /* OFPC_* flags. */
+    uint16_t miss_send_len;     /* Max bytes of new flow that datapath should
+                                   send to the controller. */
+};
+OFP_ASSERT(sizeof(struct ofp_switch_config) == 12);
+
+/* Flags to indicate behavior of the flow table for unmatched packets.
+   These flags are used in ofp_table_stats messages to describe the current
+   configuration and in ofp_table_mod messages to configure table behavior. */
+enum ofp_table_config {
+    OFPTC_TABLE_MISS_CONTROLLER = 0,      /* Send to controller. */
+    OFPTC_TABLE_MISS_CONTINUE   = 1 << 0, /* Continue to the next table in the
+                                             pipeline (OpenFlow 1.0
+                                             behavior). */
+    OFPTC_TABLE_MISS_DROP       = 1 << 1, /* Drop the packet. */
+    OFPTC_TABLE_MISS_MASK       = 3
+};
+
+/* Configure/Modify behavior of a flow table */
+struct ofp_table_mod {
+    struct ofp_header header;
+    uint8_t table_id;       /* ID of the table, 0xFF indicates all tables */
+    uint8_t pad[3];         /* Pad to 32 bits */
+    uint32_t config;        /* Bitmap of OFPTC_* flags */
+};
+OFP_ASSERT(sizeof(struct ofp_table_mod) == 16);
+
+/* Capabilities supported by the datapath. */
+enum ofp_capabilities {
+    OFPC_FLOW_STATS     = 1 << 0,  /* Flow statistics. */
+    OFPC_TABLE_STATS    = 1 << 1,  /* Table statistics. */
+    OFPC_PORT_STATS     = 1 << 2,  /* Port statistics. */
+    OFPC_GROUP_STATS    = 1 << 3,  /* Group statistics. */
+    OFPC_IP_REASM       = 1 << 5,  /* Can reassemble IP fragments. */
+    OFPC_QUEUE_STATS    = 1 << 6,  /* Queue statistics. */
+    OFPC_ARP_MATCH_IP   = 1 << 7   /* Match IP addresses in ARP pkts. */
+};
+
+/* Flags to indicate behavior of the physical port.  These flags are
+ * used in ofp_port to describe the current configuration.  They are
+ * used in the ofp_port_mod message to configure the port's behavior.
+ */
+enum ofp_port_config {
+    OFPPC_PORT_DOWN    = 1 << 0,  /* Port is administratively down. */
+
+    OFPPC_NO_RECV      = 1 << 2,  /* Drop all packets received by port. */
+    OFPPC_NO_FWD       = 1 << 5,  /* Drop packets forwarded to port. */
+    OFPPC_NO_PACKET_IN = 1 << 6   /* Do not send packet-in msgs for port. */
+};
+
+/* Current state of the physical port.  These are not configurable from
+ * the controller.
+ */
+enum ofp_port_state {
+    OFPPS_LINK_DOWN    = 1 << 0,  /* No physical link present. */
+    OFPPS_BLOCKED      = 1 << 1,  /* Port is blocked */
+    OFPPS_LIVE         = 1 << 2,  /* Live for Fast Failover Group. */
+};
+
+/* Features of ports available in a datapath. */
+enum ofp_port_features {
+    OFPPF_10MB_HD    = 1 << 0,  /* 10 Mb half-duplex rate support. */
+    OFPPF_10MB_FD    = 1 << 1,  /* 10 Mb full-duplex rate support. */
+    OFPPF_100MB_HD   = 1 << 2,  /* 100 Mb half-duplex rate support. */
+    OFPPF_100MB_FD   = 1 << 3,  /* 100 Mb full-duplex rate support. */
+    OFPPF_1GB_HD     = 1 << 4,  /* 1 Gb half-duplex rate support. */
+    OFPPF_1GB_FD     = 1 << 5,  /* 1 Gb full-duplex rate support. */
+    OFPPF_10GB_FD    = 1 << 6,  /* 10 Gb full-duplex rate support. */
+    OFPPF_40GB_FD    = 1 << 7,  /* 40 Gb full-duplex rate support. */
+    OFPPF_100GB_FD   = 1 << 8,  /* 100 Gb full-duplex rate support. */
+    OFPPF_1TB_FD     = 1 << 9,  /* 1 Tb full-duplex rate support. */
+    OFPPF_OTHER      = 1 << 10, /* Other rate, not in the list. */
+
+    OFPPF_COPPER     = 1 << 11, /* Copper medium. */
+    OFPPF_FIBER      = 1 << 12, /* Fiber medium. */
+    OFPPF_AUTONEG    = 1 << 13, /* Auto-negotiation. */
+    OFPPF_PAUSE      = 1 << 14, /* Pause. */
+    OFPPF_PAUSE_ASYM = 1 << 15  /* Asymmetric pause. */
+};
+
+/* Description of a port */
+struct ofp_port {
+    uint32_t port_no;
+    uint8_t pad[4];
+    uint8_t hw_addr[OFP_ETH_ALEN];
+    uint8_t pad2[2];                  /* Align to 64 bits. */
+    char name[OFP_MAX_PORT_NAME_LEN]; /* Null-terminated */
+
+    uint32_t config;        /* Bitmap of OFPPC_* flags. */
+    uint32_t state;         /* Bitmap of OFPPS_* flags. */
+
+    /* Bitmaps of OFPPF_* that describe features.  All bits zeroed if
+     * unsupported or unavailable. */
+    uint32_t curr;          /* Current features. */
+    uint32_t advertised;    /* Features being advertised by the port. */
+    uint32_t supported;     /* Features supported by the port. */
+    uint32_t peer;          /* Features advertised by peer. */
+
+    uint32_t curr_speed;    /* Current port bitrate in kbps. */
+    uint32_t max_speed;     /* Max port bitrate in kbps */
+};
+OFP_ASSERT(sizeof(struct ofp_port) == 64);
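+
+/* Illustrative sketch: the curr/advertised/supported/peer fields are plain
+ * OFPPF_* bitmaps, so feature tests are simple mask operations.  This
+ * hypothetical helper assumes the structure has already been converted to
+ * host byte order. */
+static inline int
+example_port_supports_10gb_fd(const struct ofp_port *p)
+{
+    return (p->supported & OFPPF_10GB_FD) != 0;
+}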
+
+/* Switch features. */
+struct ofp_switch_features {
+    struct ofp_header header;
+    uint64_t datapath_id;   /* Datapath unique ID.  The lower 48-bits are for
+                               a MAC address, while the upper 16-bits are
+                               implementer-defined. */
+
+    uint32_t n_buffers;     /* Max packets buffered at once. */
+
+    uint8_t n_tables;       /* Number of tables supported by datapath. */
+    uint8_t pad[3];         /* Align to 64-bits. */
+
+    /* Features. */
+    uint32_t capabilities;  /* Bitmap of supported "ofp_capabilities". */
+    uint32_t reserved;
+
+    /* Port info.*/
+    struct ofp_port ports[0];  /* Port definitions.  The number of ports
+                                  is inferred from the length field in
+                                  the header. */
+};
+OFP_ASSERT(sizeof(struct ofp_switch_features) == 32);
+
+/* What changed about the physical port */
+enum ofp_port_reason {
+    OFPPR_ADD,              /* The port was added. */
+    OFPPR_DELETE,           /* The port was removed. */
+    OFPPR_MODIFY            /* Some attribute of the port has changed. */
+};
+
+/* A physical port has changed in the datapath */
+struct ofp_port_status {
+    struct ofp_header header;
+    uint8_t reason;          /* One of OFPPR_*. */
+    uint8_t pad[7];          /* Align to 64-bits. */
+    struct ofp_port desc;
+};
+OFP_ASSERT(sizeof(struct ofp_port_status) == 80);
+
+/* Modify behavior of the physical port */
+struct ofp_port_mod {
+    struct ofp_header header;
+    uint32_t port_no;
+    uint8_t pad[4];
+    uint8_t hw_addr[OFP_ETH_ALEN]; /* The hardware address is not
+                                      configurable.  This is used to
+                                      sanity-check the request, so it must
+                                      be the same as returned in an
+                                      ofp_port struct. */
+    uint8_t pad2[2];        /* Pad to 64 bits. */
+    uint32_t config;        /* Bitmap of OFPPC_* flags. */
+    uint32_t mask;          /* Bitmap of OFPPC_* flags to be changed. */
+
+    uint32_t advertise;     /* Bitmap of OFPPF_*.  Zero all bits to prevent
+                               any action taking place. */
+    uint8_t pad3[4];        /* Pad to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_port_mod) == 40);
+
+/* Why is this packet being sent to the controller? */
+enum ofp_packet_in_reason {
+    OFPR_NO_MATCH,          /* No matching flow. */
+    OFPR_ACTION             /* Action explicitly output to controller. */
+};
+
+/* Packet received on port (datapath -> controller). */
+struct ofp_packet_in {
+    struct ofp_header header;
+    uint32_t buffer_id;     /* ID assigned by datapath. */
+    uint32_t in_port;       /* Port on which frame was received. */
+    uint32_t in_phy_port;   /* Physical Port on which frame was received. */
+    uint16_t total_len;     /* Full length of frame. */
+    uint8_t reason;         /* Reason packet is being sent (one of OFPR_*) */
+    uint8_t table_id;       /* ID of the table that was looked up */
+    uint8_t data[0];        /* Ethernet frame.  The amount of data is
+                               inferred from the length field in the
+                               header; the frame begins immediately after
+                               table_id, so offsetof(struct ofp_packet_in,
+                               data) == sizeof(struct ofp_packet_in). */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_in) == 24);
+
+enum ofp_action_type {
+    OFPAT_OUTPUT,           /* Output to switch port. */
+    OFPAT_SET_VLAN_VID,     /* Set the 802.1q VLAN id. */
+    OFPAT_SET_VLAN_PCP,     /* Set the 802.1q priority. */
+    OFPAT_SET_DL_SRC,       /* Ethernet source address. */
+    OFPAT_SET_DL_DST,       /* Ethernet destination address. */
+    OFPAT_SET_NW_SRC,       /* IP source address. */
+    OFPAT_SET_NW_DST,       /* IP destination address. */
+    OFPAT_SET_NW_TOS,       /* IP ToS (DSCP field, 6 bits). */
+    OFPAT_SET_NW_ECN,       /* IP ECN (2 bits). */
+    OFPAT_SET_TP_SRC,       /* TCP/UDP/SCTP source port. */
+    OFPAT_SET_TP_DST,       /* TCP/UDP/SCTP destination port. */
+    OFPAT_COPY_TTL_OUT,     /* Copy TTL "outwards" -- from next-to-outermost to
+                               outermost */
+    OFPAT_COPY_TTL_IN,      /* Copy TTL "inwards" -- from outermost to
+                               next-to-outermost */
+    OFPAT_SET_MPLS_LABEL,   /* MPLS label */
+    OFPAT_SET_MPLS_TC,      /* MPLS TC */
+    OFPAT_SET_MPLS_TTL,     /* MPLS TTL */
+    OFPAT_DEC_MPLS_TTL,     /* Decrement MPLS TTL */
+
+    OFPAT_PUSH_VLAN,        /* Push a new VLAN tag */
+    OFPAT_POP_VLAN,         /* Pop the outer VLAN tag */
+    OFPAT_PUSH_MPLS,        /* Push a new MPLS tag */
+    OFPAT_POP_MPLS,         /* Pop the outer MPLS tag */
+    OFPAT_SET_QUEUE,        /* Set queue id when outputting to a port */
+    OFPAT_GROUP,            /* Apply group. */
+    OFPAT_SET_NW_TTL,       /* IP TTL. */
+    OFPAT_DEC_NW_TTL,       /* Decrement IP TTL. */
+    OFPAT_EXPERIMENTER = 0xffff
+};
+
+/* Action structure for OFPAT_OUTPUT, which sends packets out 'port'.
+ * When 'port' is OFPP_CONTROLLER, 'max_len' indicates the maximum
+ * number of bytes to send.  A 'max_len' of zero means no bytes of the
+ * packet should be sent. */
+struct ofp_action_output {
+    uint16_t type;                  /* OFPAT_OUTPUT. */
+    uint16_t len;                   /* Length is 16. */
+    uint32_t port;                  /* Output port. */
+    uint16_t max_len;               /* Max length to send to controller. */
+    uint8_t pad[6];                 /* Pad to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_output) == 16);
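+
+/* Illustrative sketch: filling in an output action that sends at most 128
+ * bytes of each packet to the controller.  The hypothetical helper leaves
+ * byte-order conversion to the code that serializes the message. */
+static inline void
+example_action_output_to_controller(struct ofp_action_output *ao)
+{
+    int i;
+    ao->type = OFPAT_OUTPUT;
+    ao->len = (uint16_t)sizeof(*ao);   /* 16, as noted above. */
+    ao->port = OFPP_CONTROLLER;
+    ao->max_len = 128;
+    for (i = 0; i < 6; i++)
+        ao->pad[i] = 0;
+}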
+
+/* Action structure for OFPAT_SET_VLAN_VID. */
+struct ofp_action_vlan_vid {
+    uint16_t type;                  /* OFPAT_SET_VLAN_VID. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t vlan_vid;              /* VLAN id. */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_vlan_vid) == 8);
+
+/* Action structure for OFPAT_SET_VLAN_PCP. */
+struct ofp_action_vlan_pcp {
+    uint16_t type;                  /* OFPAT_SET_VLAN_PCP. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t vlan_pcp;               /* VLAN priority. */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_vlan_pcp) == 8);
+
+/* Action structure for OFPAT_SET_DL_SRC/DST. */
+struct ofp_action_dl_addr {
+    uint16_t type;                  /* OFPAT_SET_DL_SRC/DST. */
+    uint16_t len;                   /* Length is 16. */
+    uint8_t dl_addr[OFP_ETH_ALEN];  /* Ethernet address. */
+    uint8_t pad[6];
+};
+OFP_ASSERT(sizeof(struct ofp_action_dl_addr) == 16);
+
+/* Action structure for OFPAT_SET_NW_SRC/DST. */
+struct ofp_action_nw_addr {
+    uint16_t type;                  /* OFPAT_SET_NW_SRC/DST. */
+    uint16_t len;                   /* Length is 8. */
+    uint32_t nw_addr;               /* IP address. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_nw_addr) == 8);
+
+/* Action structure for OFPAT_SET_TP_SRC/DST. */
+struct ofp_action_tp_port {
+    uint16_t type;                  /* OFPAT_SET_TP_SRC/DST. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t tp_port;               /* TCP/UDP/SCTP port. */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_tp_port) == 8);
+
+/* Action structure for OFPAT_SET_NW_TOS. */
+struct ofp_action_nw_tos {
+    uint16_t type;                  /* OFPAT_SET_NW_TOS. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t nw_tos;                 /* IP ToS (DSCP field, 6 bits). */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_nw_tos) == 8);
+
+/* Action structure for OFPAT_SET_NW_ECN. */
+struct ofp_action_nw_ecn {
+    uint16_t type;                  /* OFPAT_SET_NW_ECN. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t nw_ecn;                 /* IP ECN (2 bits). */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_nw_ecn) == 8);
+
+/* Action structure for OFPAT_SET_MPLS_LABEL. */
+struct ofp_action_mpls_label {
+    uint16_t type;                  /* OFPAT_SET_MPLS_LABEL. */
+    uint16_t len;                   /* Length is 8. */
+    uint32_t mpls_label;            /* MPLS label */
+};
+OFP_ASSERT(sizeof(struct ofp_action_mpls_label) == 8);
+
+/* Action structure for OFPAT_SET_MPLS_TC. */
+struct ofp_action_mpls_tc {
+    uint16_t type;                  /* OFPAT_SET_MPLS_TC. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t mpls_tc;                /* MPLS TC */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_mpls_tc) == 8);
+
+/* Action structure for OFPAT_SET_MPLS_TTL. */
+struct ofp_action_mpls_ttl {
+    uint16_t type;                  /* OFPAT_SET_MPLS_TTL. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t mpls_ttl;               /* MPLS TTL */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_mpls_ttl) == 8);
+
+/* Action structure for OFPAT_PUSH_VLAN/MPLS. */
+struct ofp_action_push {
+    uint16_t type;                  /* OFPAT_PUSH_VLAN/MPLS. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t ethertype;             /* Ethertype */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_push) == 8);
+
+/* Action structure for OFPAT_POP_MPLS. */
+struct ofp_action_pop_mpls {
+    uint16_t type;                  /* OFPAT_POP_MPLS. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t ethertype;             /* Ethertype */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_pop_mpls) == 8);
+
+/* Action structure for OFPAT_GROUP. */
+struct ofp_action_group {
+    uint16_t type;                  /* OFPAT_GROUP. */
+    uint16_t len;                   /* Length is 8. */
+    uint32_t group_id;              /* Group identifier. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_group) == 8);
+
+/* Action structure for OFPAT_SET_NW_TTL. */
+struct ofp_action_nw_ttl {
+    uint16_t type;                  /* OFPAT_SET_NW_TTL. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t nw_ttl;                 /* IP TTL */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_nw_ttl) == 8);
+
+/* Action header for OFPAT_EXPERIMENTER.
+ * The rest of the body is experimenter-defined. */
+struct ofp_action_experimenter_header {
+    uint16_t type;                  /* OFPAT_EXPERIMENTER. */
+    uint16_t len;                   /* Length is a multiple of 8. */
+    uint32_t experimenter;          /* Experimenter ID which takes the same
+                                       form as in struct
+                                       ofp_experimenter_header. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_experimenter_header) == 8);
+
+/* Action header that is common to all actions.  The length includes the
+ * header and any padding used to make the action 64-bit aligned.
+ * NB: The length of an action *must* always be a multiple of eight. */
+struct ofp_action_header {
+    uint16_t type;                  /* One of OFPAT_*. */
+    uint16_t len;                   /* Length of action, including this
+                                       header.  This is the length of action,
+                                       including any padding to make it
+                                       64-bit aligned. */
+    uint8_t pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_action_header) == 8);
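+
+/* Illustrative sketch: because every action carries its own 8-byte-multiple
+ * length, an action list (as carried in packet-out messages and in
+ * instructions) can be walked generically.  This hypothetical helper
+ * assumes 'buf' is suitably aligned and that lengths are already in host
+ * byte order. */
+static inline unsigned int
+example_count_actions(const uint8_t *buf, uint16_t actions_len)
+{
+    unsigned int count = 0;
+    uint32_t offset = 0;
+    while (offset + sizeof(struct ofp_action_header) <= actions_len) {
+        const struct ofp_action_header *ah =
+            (const struct ofp_action_header *)(buf + offset);
+        if (ah->len < sizeof(struct ofp_action_header))
+            break;                     /* Malformed action; stop walking. */
+        count++;
+        offset += ah->len;
+    }
+    return count;
+}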
+
+/* Send packet (controller -> datapath). */
+struct ofp_packet_out {
+    struct ofp_header header;
+    uint32_t buffer_id;           /* ID assigned by datapath (-1 if none). */
+    uint32_t in_port;             /* Packet's input port or OFPP_CONTROLLER. */
+    uint16_t actions_len;         /* Size of action array in bytes. */
+    uint8_t pad[6];
+    struct ofp_action_header actions[0]; /* Action list. */
+    /* uint8_t data[0]; */        /* Packet data.  The length is inferred
+                                     from the length field in the header.
+                                     (Only meaningful if buffer_id == -1.) */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_out) == 24);
+
+enum ofp_flow_mod_command {
+    OFPFC_ADD,              /* New flow. */
+    OFPFC_MODIFY,           /* Modify all matching flows. */
+    OFPFC_MODIFY_STRICT,    /* Modify entry strictly matching wildcards and
+                               priority. */
+    OFPFC_DELETE,           /* Delete all matching flows. */
+    OFPFC_DELETE_STRICT     /* Delete entry strictly matching wildcards and
+                               priority. */
+};
+
+/* Group commands */
+enum ofp_group_mod_command {
+    OFPGC_ADD,              /* New group. */
+    OFPGC_MODIFY,           /* Modify all matching groups. */
+    OFPGC_DELETE,           /* Delete all matching groups. */
+};
+
+/* Flow wildcards. */
+enum ofp_flow_wildcards {
+    OFPFW_IN_PORT     = 1 << 0,  /* Switch input port. */
+    OFPFW_DL_VLAN     = 1 << 1,  /* VLAN id. */
+    OFPFW_DL_VLAN_PCP = 1 << 2,  /* VLAN priority. */
+    OFPFW_DL_TYPE     = 1 << 3,  /* Ethernet frame type. */
+    OFPFW_NW_TOS      = 1 << 4,  /* IP ToS (DSCP field, 6 bits). */
+    OFPFW_NW_PROTO    = 1 << 5,  /* IP protocol. */
+    OFPFW_TP_SRC      = 1 << 6,  /* TCP/UDP/SCTP source port. */
+    OFPFW_TP_DST      = 1 << 7,  /* TCP/UDP/SCTP destination port. */
+    OFPFW_MPLS_LABEL  = 1 << 8,  /* MPLS label. */
+    OFPFW_MPLS_TC     = 1 << 9,  /* MPLS TC. */
+
+    /* Wildcard all fields. */
+    OFPFW_ALL           = ((1 << 10) - 1)
+};
+
+/* The wildcards for ICMP type and code fields use the transport source
+ * and destination port fields, respectively. */
+#define OFPFW_ICMP_TYPE OFPFW_TP_SRC
+#define OFPFW_ICMP_CODE OFPFW_TP_DST
+
+/* Values below this cutoff are 802.3 packets and the two bytes
+ * following MAC addresses are used as a frame length.  Otherwise, the
+ * two bytes are used as the Ethernet type.
+ */
+#define OFP_DL_TYPE_ETH2_CUTOFF   0x0600
+
+/* Value of dl_type to indicate that the frame does not include an
+ * Ethernet type.
+ */
+#define OFP_DL_TYPE_NOT_ETH_TYPE  0x05ff
+
+/* The VLAN id is 12-bits, so we can use the entire 16 bits to indicate
+ * special conditions.
+ */
+enum ofp_vlan_id {
+    OFPVID_ANY  = 0xfffe, /* Indicate that a VLAN id is set but don't care
+                             about its value. Note: only valid when specifying
+                             the VLAN id in a match. */
+    OFPVID_NONE = 0xffff, /* No VLAN id was set. */
+};
+/* Define for compatibility */
+#define OFP_VLAN_NONE      OFPVID_NONE
+
+/* The match type indicates the match structure (set of fields that compose the
+ * match) in use. The match type is placed in the type field at the beginning
+ * of all match structures. The "standard" type corresponds to ofp_match and
+ * must be supported by all OpenFlow switches. Extensions that define other
+ * match types may be published on the OpenFlow wiki. Support for extensions is
+ * optional.
+ */
+enum ofp_match_type {
+    OFPMT_STANDARD,           /* The match fields defined in the ofp_match
+                                 structure apply */
+};
+
+/* Size/length of STANDARD match */
+#define OFPMT_STANDARD_LENGTH   88
+
+/* Fields to match against flows */
+struct ofp_match {
+    uint16_t type;             /* One of OFPMT_* */
+    uint16_t length;           /* Length of ofp_match */
+    uint32_t in_port;          /* Input switch port. */
+    uint32_t wildcards;        /* Wildcard fields. */
+    uint8_t dl_src[OFP_ETH_ALEN]; /* Ethernet source address. */
+    uint8_t dl_src_mask[OFP_ETH_ALEN]; /* Ethernet source address mask. */
+    uint8_t dl_dst[OFP_ETH_ALEN]; /* Ethernet destination address. */
+    uint8_t dl_dst_mask[OFP_ETH_ALEN]; /* Ethernet destination address mask. */
+    uint16_t dl_vlan;          /* Input VLAN id. */
+    uint8_t dl_vlan_pcp;       /* Input VLAN priority. */
+    uint8_t pad1[1];           /* Align to 32-bits */
+    uint16_t dl_type;          /* Ethernet frame type. */
+    uint8_t nw_tos;            /* IP ToS (actually DSCP field, 6 bits). */
+    uint8_t nw_proto;          /* IP protocol or lower 8 bits of
+                                * ARP opcode. */
+    uint32_t nw_src;           /* IP source address. */
+    uint32_t nw_src_mask;      /* IP source address mask. */
+    uint32_t nw_dst;           /* IP destination address. */
+    uint32_t nw_dst_mask;      /* IP destination address mask. */
+    uint16_t tp_src;           /* TCP/UDP/SCTP source port. */
+    uint16_t tp_dst;           /* TCP/UDP/SCTP destination port. */
+    uint32_t mpls_label;       /* MPLS label. */
+    uint8_t mpls_tc;           /* MPLS TC. */
+    uint8_t pad2[3];           /* Align to 64-bits */
+    uint64_t metadata;         /* Metadata passed between tables. */
+    uint64_t metadata_mask;    /* Mask for metadata. */
+};
+OFP_ASSERT(sizeof(struct ofp_match) == OFPMT_STANDARD_LENGTH);
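+
+/* Illustrative sketch: minimal initialization of a standard match that
+ * wildcards all ten OFPFW_* fields; callers then clear individual wildcard
+ * bits and fill in the corresponding fields (and masks) as needed.  This
+ * hypothetical helper assumes the structure was zero-initialized first. */
+static inline void
+example_match_init(struct ofp_match *m)
+{
+    m->type = OFPMT_STANDARD;
+    m->length = OFPMT_STANDARD_LENGTH;  /* 88, matching the assert above. */
+    m->wildcards = OFPFW_ALL;           /* Ignore every OFPFW_* field. */
+}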
+
+/* The match fields for ICMP type and code use the transport source and
+ * destination port fields, respectively. */
+#define icmp_type tp_src
+#define icmp_code tp_dst
+
+/* Value used in "idle_timeout" and "hard_timeout" to indicate that the entry
+ * is permanent. */
+#define OFP_FLOW_PERMANENT 0
+
+/* By default, choose a priority in the middle. */
+#define OFP_DEFAULT_PRIORITY 0x8000
+
+enum ofp_instruction_type {
+    OFPIT_GOTO_TABLE = 1,       /* Setup the next table in the lookup
+                                   pipeline */
+    OFPIT_WRITE_METADATA = 2,   /* Setup the metadata field for use later in
+                                   pipeline */
+    OFPIT_WRITE_ACTIONS = 3,    /* Write the action(s) onto the datapath action
+                                   set */
+    OFPIT_APPLY_ACTIONS = 4,    /* Applies the action(s) immediately */
+    OFPIT_CLEAR_ACTIONS = 5,    /* Clears all actions from the datapath
+                                   action set */
+
+    OFPIT_EXPERIMENTER = 0xFFFF  /* Experimenter instruction */
+};
+
+/* Generic ofp_instruction structure */
+struct ofp_instruction {
+    uint16_t type;                /* Instruction type */
+    uint16_t len;                 /* Length of this struct in bytes. */
+    uint8_t pad[4];               /* Align to 64-bits */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction) == 8);
+
+/* Instruction structure for OFPIT_GOTO_TABLE */
+struct ofp_instruction_goto_table {
+    uint16_t type;                /* OFPIT_GOTO_TABLE */
+    uint16_t len;                 /* Length of this struct in bytes. */
+    uint8_t table_id;             /* Set next table in the lookup pipeline */
+    uint8_t pad[3];               /* Pad to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_goto_table) == 8);
+
+/* Instruction structure for OFPIT_WRITE_METADATA */
+struct ofp_instruction_write_metadata {
+    uint16_t type;                /* OFPIT_WRITE_METADATA */
+    uint16_t len;                 /* Length of this struct in bytes. */
+    uint8_t pad[4];               /* Align to 64-bits */
+    uint64_t metadata;            /* Metadata value to write */
+    uint64_t metadata_mask;       /* Metadata write bitmask */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_write_metadata) == 24);
+
+/* Instruction structure for OFPIT_WRITE/APPLY/CLEAR_ACTIONS */
+struct ofp_instruction_actions {
+    uint16_t type;              /* One of OFPIT_*_ACTIONS */
+    uint16_t len;               /* Length of this struct in bytes. */
+    uint8_t pad[4];             /* Align to 64-bits */
+    struct ofp_action_header actions[0];  /* Actions associated with
+                                             OFPIT_WRITE_ACTIONS and
+                                             OFPIT_APPLY_ACTIONS */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_actions) == 8);
+
+/* Instruction structure for experimental instructions */
+struct ofp_instruction_experimenter {
+    uint16_t type;		/* OFPIT_EXPERIMENTER */
+    uint16_t len;               /* Length of this struct in bytes */
+    uint32_t experimenter;      /* Experimenter ID:
+                                 * - MSB 0: low-order bytes are IEEE OUI.
+                                 * - MSB != 0: defined by OpenFlow
+                                 *   consortium. */
+    /* Experimenter-defined arbitrary additional data. */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_experimenter) == 8);
+
+enum ofp_flow_mod_flags {
+    OFPFF_SEND_FLOW_REM = 1 << 0,  /* Send flow removed message when flow
+                                    * expires or is deleted. */
+    OFPFF_CHECK_OVERLAP = 1 << 1   /* Check for overlapping entries first. */
+};
+
+/* Flow setup and teardown (controller -> datapath). */
+struct ofp_flow_mod {
+    struct ofp_header header;
+    uint64_t cookie;             /* Opaque controller-issued identifier. */
+    uint64_t cookie_mask;        /* Mask used to restrict the cookie bits
+                                    that must match when the command is
+                                    OFPFC_MODIFY* or OFPFC_DELETE*. A value
+                                    of 0 indicates no restriction. */
+
+    /* Flow actions. */
+    uint8_t table_id;             /* ID of the table to put the flow in */
+    uint8_t command;              /* One of OFPFC_*. */
+    uint16_t idle_timeout;        /* Idle time before discarding (seconds). */
+    uint16_t hard_timeout;        /* Max time before discarding (seconds). */
+    uint16_t priority;            /* Priority level of flow entry. */
+    uint32_t buffer_id;           /* Buffered packet to apply to (or -1).
+                                     Not meaningful for OFPFC_DELETE*. */
+    uint32_t out_port;            /* For OFPFC_DELETE* commands, require
+                                     matching entries to include this as an
+                                     output port.  A value of OFPP_ANY
+                                     indicates no restriction. */
+    uint32_t out_group;           /* For OFPFC_DELETE* commands, require
+                                     matching entries to include this as an
+                                     output group.  A value of OFPG_ANY
+                                     indicates no restriction. */
+    uint16_t flags;               /* One of OFPFF_*. */
+    uint8_t pad[2];
+    struct ofp_match match;       /* Fields to match */
+    struct ofp_instruction instructions[0]; /* Instruction set */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_mod) == 136);
+
+/* Group numbering. Groups can use any number up to OFPG_MAX. */
+enum ofp_group {
+    /* Last usable group number. */
+    OFPG_MAX        = 0xffffff00,
+
+    /* Fake groups. */
+    OFPG_ALL        = 0xfffffffc,  /* Represents all groups for group delete
+                                      commands. */
+    OFPG_ANY        = 0xffffffff   /* Wildcard group used only for flow stats
+                                      requests. Selects all flows regardless of
+                                      group (including flows with no group).
+                                      */
+};
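+
+/* Illustrative sketch: default fixed fields for an OFPFC_ADD flow mod using
+ * struct ofp_flow_mod defined above.  The match, the instruction list, the
+ * OpenFlow header and byte-order conversion are assumed to be filled in
+ * elsewhere; this hypothetical helper only shows how the constants defined
+ * in this file fit together. */
+static inline void
+example_flow_mod_add_defaults(struct ofp_flow_mod *fm, uint8_t table_id)
+{
+    fm->cookie = 0;
+    fm->cookie_mask = 0;               /* No cookie restriction. */
+    fm->table_id = table_id;
+    fm->command = OFPFC_ADD;
+    fm->idle_timeout = OFP_FLOW_PERMANENT;
+    fm->hard_timeout = OFP_FLOW_PERMANENT;
+    fm->priority = OFP_DEFAULT_PRIORITY;
+    fm->buffer_id = 0xffffffff;        /* -1: no buffered packet. */
+    fm->out_port = OFPP_ANY;           /* Only meaningful for OFPFC_DELETE*. */
+    fm->out_group = OFPG_ANY;          /* Only meaningful for OFPFC_DELETE*. */
+    fm->flags = OFPFF_SEND_FLOW_REM;
+    fm->pad[0] = fm->pad[1] = 0;
+}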
+
+/* Bucket for use in groups. */
+struct ofp_bucket {
+    uint16_t len;                   /* Length of the bucket in bytes, including
+                                       this header and any padding to make it
+                                       64-bit aligned. */
+    uint16_t weight;                /* Relative weight of bucket.  Only
+                                       defined for select groups. */
+    uint32_t watch_port;            /* Port whose state affects whether this
+                                       bucket is live.  Only required for fast
+                                       failover groups. */
+    uint32_t watch_group;           /* Group whose state affects whether this
+                                       bucket is live.  Only required for fast
+                                       failover groups. */
+    uint8_t pad[4];
+    struct ofp_action_header actions[0]; /* The action length is inferred
+                                           from the length field in the
+                                           header. */
+};
+OFP_ASSERT(sizeof(struct ofp_bucket) == 16);
+
+/* Group setup and teardown (controller -> datapath). */
+struct ofp_group_mod {
+    struct ofp_header header;
+    uint16_t command;             /* One of OFPGC_*. */
+    uint8_t type;                 /* One of OFPGT_*. */
+    uint8_t pad;                  /* Pad to 64 bits. */
+    uint32_t group_id;            /* Group identifier. */
+    struct ofp_bucket buckets[0]; /* The bucket length is inferred from the
+                                     length field in the header. */
+};
+OFP_ASSERT(sizeof(struct ofp_group_mod) == 16);
+
+/* Group types.  Values in the range [128, 255] are reserved for experimental
+ * use. */
+enum ofp_group_type {
+    OFPGT_ALL,      /* All (multicast/broadcast) group.  */
+    OFPGT_SELECT,   /* Select group. */
+    OFPGT_INDIRECT, /* Indirect group. */
+    OFPGT_FF        /* Fast failover group. */
+};
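+
+/* Illustrative sketch: a group mod that removes every group, combining
+ * OFPGC_DELETE with the OFPG_ALL fake group defined above.  This
+ * hypothetical helper leaves the OpenFlow header and byte ordering to the
+ * caller; the group type is assumed to be ignored for deletes. */
+static inline void
+example_group_mod_delete_all(struct ofp_group_mod *gm)
+{
+    gm->command = OFPGC_DELETE;
+    gm->type = 0;
+    gm->pad = 0;
+    gm->group_id = OFPG_ALL;
+}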
+
+/* Why was this flow removed? */
+enum ofp_flow_removed_reason {
+    OFPRR_IDLE_TIMEOUT,         /* Flow idle time exceeded idle_timeout. */
+    OFPRR_HARD_TIMEOUT,         /* Time exceeded hard_timeout. */
+    OFPRR_DELETE,               /* Evicted by a DELETE flow mod. */
+    OFPRR_GROUP_DELETE          /* Group was removed. */
+};
+
+/* Flow removed (datapath -> controller). */
+struct ofp_flow_removed {
+    struct ofp_header header;
+    uint64_t cookie;          /* Opaque controller-issued identifier. */
+
+    uint16_t priority;        /* Priority level of flow entry. */
+    uint8_t reason;           /* One of OFPRR_*. */
+    uint8_t table_id;         /* ID of the table */
+
+    uint32_t duration_sec;    /* Time flow was alive in seconds. */
+    uint32_t duration_nsec;   /* Time flow was alive in nanoseconds beyond
+                                 duration_sec. */
+    uint16_t idle_timeout;    /* Idle timeout from original flow mod. */
+    uint8_t pad2[2];          /* Align to 64-bits. */
+    uint64_t packet_count;
+    uint64_t byte_count;
+    struct ofp_match match;   /* Description of fields. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_removed) == 136);
+
+/* Values for 'type' in ofp_error_message.  These values are immutable: they
+ * will not change in future versions of the protocol (although new values may
+ * be added). */
+enum ofp_error_type {
+    OFPET_HELLO_FAILED,         /* Hello protocol failed. */
+    OFPET_BAD_REQUEST,          /* Request was not understood. */
+    OFPET_BAD_ACTION,           /* Error in action description. */
+    OFPET_BAD_INSTRUCTION,      /* Error in instruction list. */
+    OFPET_BAD_MATCH,            /* Error in match. */
+    OFPET_FLOW_MOD_FAILED,      /* Problem modifying flow entry. */
+    OFPET_GROUP_MOD_FAILED,     /* Problem modifying group entry. */
+    OFPET_PORT_MOD_FAILED,      /* Port mod request failed. */
+    OFPET_TABLE_MOD_FAILED,     /* Table mod request failed. */
+    OFPET_QUEUE_OP_FAILED,      /* Queue operation failed. */
+    OFPET_SWITCH_CONFIG_FAILED, /* Switch config request failed. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_HELLO_FAILED.  'data' contains an
+ * ASCII text string that may give failure details. */
+enum ofp_hello_failed_code {
+    OFPHFC_INCOMPATIBLE,        /* No compatible version. */
+    OFPHFC_EPERM                /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_REQUEST.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_request_code {
+    OFPBRC_BAD_VERSION,         /* ofp_header.version not supported. */
+    OFPBRC_BAD_TYPE,            /* ofp_header.type not supported. */
+    OFPBRC_BAD_STAT,            /* ofp_stats_request.type not supported. */
+    OFPBRC_BAD_EXPERIMENTER,    /* Experimenter id not supported
+                                 * (in ofp_experimenter_header
+                                 * or ofp_stats_request or ofp_stats_reply). */
+    OFPBRC_BAD_SUBTYPE,         /* Experimenter subtype not supported. */
+    OFPBRC_EPERM,               /* Permissions error. */
+    OFPBRC_BAD_LEN,             /* Wrong request length for type. */
+    OFPBRC_BUFFER_EMPTY,        /* Specified buffer has already been used. */
+    OFPBRC_BUFFER_UNKNOWN,      /* Specified buffer does not exist. */
+    OFPBRC_BAD_TABLE_ID         /* Specified table-id invalid or does not
+                                 * exist. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_ACTION.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_action_code {
+    OFPBAC_BAD_TYPE,           /* Unknown action type. */
+    OFPBAC_BAD_LEN,            /* Length problem in actions. */
+    OFPBAC_BAD_EXPERIMENTER,   /* Unknown experimenter id specified. */
+    OFPBAC_BAD_EXPERIMENTER_TYPE, /* Unknown action type for experimenter id. */
+    OFPBAC_BAD_OUT_PORT,       /* Problem validating output port. */
+    OFPBAC_BAD_ARGUMENT,       /* Bad action argument. */
+    OFPBAC_EPERM,              /* Permissions error. */
+    OFPBAC_TOO_MANY,           /* Can't handle this many actions. */
+    OFPBAC_BAD_QUEUE,          /* Problem validating output queue. */
+    OFPBAC_BAD_OUT_GROUP,      /* Invalid group id in forward action. */
+    OFPBAC_MATCH_INCONSISTENT, /* Action can't apply for this match. */
+    OFPBAC_UNSUPPORTED_ORDER,  /* Action order is unsupported for the action
+				  list in an Apply-Actions instruction */
+    OFPBAC_BAD_TAG,            /* Action uses an unsupported
+                                  tag/encap. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_INSTRUCTION.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_instruction_code {
+    OFPBIC_UNKNOWN_INST,       /* Unknown instruction. */
+    OFPBIC_UNSUP_INST,         /* Switch or table does not support the
+                                  instruction. */
+    OFPBIC_BAD_TABLE_ID,       /* Invalid Table-ID specified. */
+    OFPBIC_UNSUP_METADATA,     /* Metadata value unsupported by datapath. */
+    OFPBIC_UNSUP_METADATA_MASK,/* Metadata mask value unsupported by
+                                  datapath. */
+    OFPBIC_UNSUP_EXP_INST,     /* Specific experimenter instruction
+                                  unsupported. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_MATCH.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_match_code {
+    OFPBMC_BAD_TYPE,            /* Unsupported match type specified by the
+                                   match */
+    OFPBMC_BAD_LEN,             /* Length problem in match. */
+    OFPBMC_BAD_TAG,             /* Match uses an unsupported tag/encap. */
+    OFPBMC_BAD_DL_ADDR_MASK,    /* Unsupported datalink addr mask - switch does
+                                   not support arbitrary datalink address
+                                   mask. */
+    OFPBMC_BAD_NW_ADDR_MASK,    /* Unsupported network addr mask - switch does
+                                   not support arbitrary network address
+                                   mask. */
+    OFPBMC_BAD_WILDCARDS,       /* Unsupported wildcard specified in the
+                                   match. */
+    OFPBMC_BAD_FIELD,		/* Unsupported field in the match. */
+    OFPBMC_BAD_VALUE,		/* Unsupported value in a match field. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_FLOW_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_flow_mod_failed_code {
+    OFPFMFC_UNKNOWN,            /* Unspecified error. */
+    OFPFMFC_TABLE_FULL,         /* Flow not added because table was full. */
+    OFPFMFC_BAD_TABLE_ID,       /* Table does not exist */
+    OFPFMFC_OVERLAP,            /* Attempted to add overlapping flow with
+                                   CHECK_OVERLAP flag set. */
+    OFPFMFC_EPERM,              /* Permissions error. */
+    OFPFMFC_BAD_TIMEOUT,        /* Flow not added because of unsupported
+                                   idle/hard timeout. */
+    OFPFMFC_BAD_COMMAND,        /* Unsupported or unknown command. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_GROUP_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_group_mod_failed_code {
+    OFPGMFC_GROUP_EXISTS,             /* Group not added because a group ADD
+                                       * attempted to replace an
+                                       * already-present group. */
+    OFPGMFC_INVALID_GROUP,            /* Group not added because the group
+                                       * specified is invalid. */
+    OFPGMFC_WEIGHT_UNSUPPORTED,       /* Switch does not support unequal load
+                                       * sharing with select groups. */
+    OFPGMFC_OUT_OF_GROUPS,            /* The group table is full. */
+    OFPGMFC_OUT_OF_BUCKETS,           /* The maximum number of action buckets
+                                       * for a group has been exceeded. */
+    OFPGMFC_CHAINING_UNSUPPORTED,     /* Switch does not support groups that
+                                       * forward to groups. */
+    OFPGMFC_WATCH_UNSUPPORTED,        /* This group cannot watch the
+                                         watch_port or watch_group specified. */
+    OFPGMFC_LOOP,                     /* Group entry would cause a loop. */
+    OFPGMFC_UNKNOWN_GROUP,            /* Group not modified because a group
+                                         MODIFY attempted to modify a
+                                         non-existent group. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_PORT_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_port_mod_failed_code {
+    OFPPMFC_BAD_PORT,            /* Specified port number does not exist. */
+    OFPPMFC_BAD_HW_ADDR,         /* Specified hardware address does not
+                                  * match the port number. */
+    OFPPMFC_BAD_CONFIG,          /* Specified config is invalid. */
+    OFPPMFC_BAD_ADVERTISE        /* Specified advertise is invalid. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_TABLE_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_table_mod_failed_code {
+    OFPTMFC_BAD_TABLE,           /* Specified table does not exist. */
+    OFPTMFC_BAD_CONFIG           /* Specified config is invalid. */
+};
+
+/* ofp_error msg 'code' values for OFPET_QUEUE_OP_FAILED. 'data' contains
+ * at least the first 64 bytes of the failed request */
+enum ofp_queue_op_failed_code {
+    OFPQOFC_BAD_PORT,           /* Invalid port (or port does not exist). */
+    OFPQOFC_BAD_QUEUE,          /* Queue does not exist. */
+    OFPQOFC_EPERM               /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_SWITCH_CONFIG_FAILED. 'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_switch_config_failed_code {
+    OFPSCFC_BAD_FLAGS,           /* Specified flags are invalid. */
+    OFPSCFC_BAD_LEN              /* Specified len is invalid. */
+};
+
+/* OFPT_ERROR: Error message (datapath -> controller). */
+struct ofp_error_msg {
+    struct ofp_header header;
+
+    uint16_t type;
+    uint16_t code;
+    uint8_t data[0];          /* Variable-length data.  Interpreted based
+                                 on the type and code.  No padding. */
+};
+OFP_ASSERT(sizeof(struct ofp_error_msg) == 12);
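+
+/* Illustrative sketch: the 'type' field selects which of the enums above
+ * gives meaning to 'code', so error handling is usually a two-level
+ * dispatch.  This hypothetical predicate assumes host byte order. */
+static inline int
+example_error_is_table_full(const struct ofp_error_msg *err)
+{
+    return err->type == OFPET_FLOW_MOD_FAILED
+        && err->code == OFPFMFC_TABLE_FULL;
+}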
+
+enum ofp_stats_types {
+    /* Description of this OpenFlow switch.
+     * The request body is empty.
+     * The reply body is struct ofp_desc_stats. */
+    OFPST_DESC,
+
+    /* Individual flow statistics.
+     * The request body is struct ofp_flow_stats_request.
+     * The reply body is an array of struct ofp_flow_stats. */
+    OFPST_FLOW,
+
+    /* Aggregate flow statistics.
+     * The request body is struct ofp_aggregate_stats_request.
+     * The reply body is struct ofp_aggregate_stats_reply. */
+    OFPST_AGGREGATE,
+
+    /* Flow table statistics.
+     * The request body is empty.
+     * The reply body is an array of struct ofp_table_stats. */
+    OFPST_TABLE,
+
+    /* Port statistics.
+     * The request body is struct ofp_port_stats_request.
+     * The reply body is an array of struct ofp_port_stats. */
+    OFPST_PORT,
+
+    /* Queue statistics for a port
+     * The request body defines the port
+     * The reply body is an array of struct ofp_queue_stats */
+    OFPST_QUEUE,
+
+    /* Group counter statistics.
+     * The request body is empty.
+     * The reply is struct ofp_group_stats. */
+    OFPST_GROUP,
+
+    /* Group description statistics.
+     * The request body is empty.
+     * The reply body is struct ofp_group_desc_stats. */
+    OFPST_GROUP_DESC,
+
+    /* Experimenter extension.
+     * The request and reply bodies begin with a 32-bit experimenter ID,
+     * which takes the same form as in "struct ofp_experimenter_header".
+     * The request and reply bodies are otherwise experimenter-defined. */
+    OFPST_EXPERIMENTER = 0xffff
+};
+
+struct ofp_stats_request {
+    struct ofp_header header;
+    uint16_t type;              /* One of the OFPST_* constants. */
+    uint16_t flags;             /* OFPSF_REQ_* flags (none yet defined). */
+    uint8_t pad[4];
+    uint8_t body[0];            /* Body of the request. */
+};
+OFP_ASSERT(sizeof(struct ofp_stats_request) == 16);
+
+enum ofp_stats_reply_flags {
+    OFPSF_REPLY_MORE  = 1 << 0  /* More replies to follow. */
+};
+
+struct ofp_stats_reply {
+    struct ofp_header header;
+    uint16_t type;              /* One of the OFPST_* constants. */
+    uint16_t flags;             /* OFPSF_REPLY_* flags. */
+    uint8_t pad[4];
+    uint8_t body[0];            /* Body of the reply. */
+};
+OFP_ASSERT(sizeof(struct ofp_stats_reply) == 16);
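+
+/* Illustrative sketch: stats replies may be split across several messages;
+ * a reply sequence is complete only when OFPSF_REPLY_MORE is clear, so
+ * receivers keep collecting replies with the same transaction id until
+ * this hypothetical predicate returns nonzero (host byte order assumed). */
+static inline int
+example_stats_reply_is_last(const struct ofp_stats_reply *reply)
+{
+    return (reply->flags & OFPSF_REPLY_MORE) == 0;
+}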
+
+#define DESC_STR_LEN   256
+#define SERIAL_NUM_LEN 32
+/* Body of reply to OFPST_DESC request.  Each entry is a NULL-terminated
+ * ASCII string. */
+struct ofp_desc_stats {
+    char mfr_desc[DESC_STR_LEN];       /* Manufacturer description. */
+    char hw_desc[DESC_STR_LEN];        /* Hardware description. */
+    char sw_desc[DESC_STR_LEN];        /* Software description. */
+    char serial_num[SERIAL_NUM_LEN];   /* Serial number. */
+    char dp_desc[DESC_STR_LEN];        /* Human readable description of datapath. */
+};
+OFP_ASSERT(sizeof(struct ofp_desc_stats) == 1056);
+
+/* Body for ofp_stats_request of type OFPST_FLOW. */
+struct ofp_flow_stats_request {
+    uint8_t table_id;         /* ID of table to read (from ofp_table_stats),
+                                 0xff for all tables. */
+    uint8_t pad[3];           /* Align to 64 bits. */
+    uint32_t out_port;        /* Require matching entries to include this
+                                 as an output port.  A value of OFPP_ANY
+                                 indicates no restriction. */
+    uint32_t out_group;       /* Require matching entries to include this
+                                 as an output group.  A value of OFPG_ANY
+                                 indicates no restriction. */
+    uint8_t pad2[4];          /* Align to 64 bits. */
+    uint64_t cookie;          /* Require matching entries to contain this
+                                 cookie value */
+    uint64_t cookie_mask;     /* Mask used to restrict the cookie bits that
+                                 must match. A value of 0 indicates
+                                 no restriction. */
+    struct ofp_match match;   /* Fields to match. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_stats_request) == 120);
+
+/* Body of reply to OFPST_FLOW request. */
+struct ofp_flow_stats {
+    uint16_t length;          /* Length of this entry. */
+    uint8_t table_id;         /* ID of table flow came from. */
+    uint8_t pad;
+    uint32_t duration_sec;    /* Time flow has been alive in seconds. */
+    uint32_t duration_nsec;   /* Time flow has been alive in nanoseconds beyond
+                                 duration_sec. */
+    uint16_t priority;        /* Priority of the entry. Only meaningful
+                                 when this is not an exact-match entry. */
+    uint16_t idle_timeout;    /* Number of seconds idle before expiration. */
+    uint16_t hard_timeout;    /* Number of seconds before expiration. */
+    uint8_t pad2[6];          /* Align to 64-bits. */
+    uint64_t cookie;          /* Opaque controller-issued identifier. */
+    uint64_t packet_count;    /* Number of packets in flow. */
+    uint64_t byte_count;      /* Number of bytes in flow. */
+    struct ofp_match match;   /* Description of fields. */
+    struct ofp_instruction instructions[0]; /* Instruction set. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_stats) == 136);
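+
+/* Illustrative sketch: OFPST_FLOW reply bodies are a sequence of
+ * variable-length entries, each carrying its own 'length', so they are
+ * walked much like action lists.  This hypothetical helper assumes host
+ * byte order and a suitably aligned body. */
+static inline unsigned int
+example_count_flow_stats(const uint8_t *body, uint16_t body_len)
+{
+    unsigned int count = 0;
+    uint32_t offset = 0;
+    while (offset + sizeof(struct ofp_flow_stats) <= body_len) {
+        const struct ofp_flow_stats *fs =
+            (const struct ofp_flow_stats *)(body + offset);
+        if (fs->length < sizeof(struct ofp_flow_stats))
+            break;                     /* Malformed entry; stop walking. */
+        count++;
+        offset += fs->length;
+    }
+    return count;
+}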
+
+/* Body for ofp_stats_request of type OFPST_AGGREGATE. */
+struct ofp_aggregate_stats_request {
+    uint8_t table_id;         /* ID of table to read (from ofp_table_stats)
+                                 0xff for all tables. */
+    uint8_t pad[3];           /* Align to 64 bits. */
+    uint32_t out_port;        /* Require matching entries to include this
+                                 as an output port.  A value of OFPP_ANY
+                                 indicates no restriction. */
+    uint32_t out_group;       /* Require matching entries to include this
+                                 as an output group.  A value of OFPG_ANY
+                                 indicates no restriction. */
+    uint8_t pad2[4];          /* Align to 64 bits. */
+    uint64_t cookie;          /* Require matching entries to contain this
+                                 cookie value */
+    uint64_t cookie_mask;     /* Mask used to restrict the cookie bits that
+                                 must match. A value of 0 indicates
+                                 no restriction. */
+    struct ofp_match match;   /* Fields to match. */
+};
+OFP_ASSERT(sizeof(struct ofp_aggregate_stats_request) == 120);
+
+/* Body of reply to OFPST_AGGREGATE request. */
+struct ofp_aggregate_stats_reply {
+    uint64_t packet_count;    /* Number of packets in flows. */
+    uint64_t byte_count;      /* Number of bytes in flows. */
+    uint32_t flow_count;      /* Number of flows. */
+    uint8_t pad[4];           /* Align to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_aggregate_stats_reply) == 24);
+
+/* Flow match fields. */
+enum ofp_flow_match_fields {
+    OFPFMF_IN_PORT     = 1 << 0,  /* Switch input port. */
+    OFPFMF_DL_VLAN     = 1 << 1,  /* VLAN id. */
+    OFPFMF_DL_VLAN_PCP = 1 << 2,  /* VLAN priority. */
+    OFPFMF_DL_TYPE     = 1 << 3,  /* Ethernet frame type. */
+    OFPFMF_NW_TOS      = 1 << 4,  /* IP ToS (DSCP field, 6 bits). */
+    OFPFMF_NW_PROTO    = 1 << 5,  /* IP protocol. */
+    OFPFMF_TP_SRC      = 1 << 6,  /* TCP/UDP/SCTP source port. */
+    OFPFMF_TP_DST      = 1 << 7,  /* TCP/UDP/SCTP destination port. */
+    OFPFMF_MPLS_LABEL  = 1 << 8,  /* MPLS label. */
+    OFPFMF_MPLS_TC     = 1 << 9,  /* MPLS TC. */
+    OFPFMF_TYPE        = 1 << 10, /* Match type. */
+    OFPFMF_DL_SRC      = 1 << 11, /* Ethernet source address. */
+    OFPFMF_DL_DST      = 1 << 12, /* Ethernet destination address. */
+    OFPFMF_NW_SRC      = 1 << 13, /* IP source address. */
+    OFPFMF_NW_DST      = 1 << 14, /* IP destination address. */
+    OFPFMF_METADATA    = 1 << 15, /* Metadata passed between tables. */
+};
+
+/* Body of reply to OFPST_TABLE request. */
+struct ofp_table_stats {
+    uint8_t table_id;        /* Identifier of table.  Lower numbered tables
+                                are consulted first. */
+    uint8_t pad[7];          /* Align to 64-bits. */
+    char name[OFP_MAX_TABLE_NAME_LEN];
+    uint32_t wildcards;      /* Bitmap of OFPFMF_* wildcards that are
+                                supported by the table. */
+    uint32_t match;          /* Bitmap of OFPFMF_* that indicate the fields
+                                the table can match on. */
+    uint32_t instructions;   /* Bitmap of OFPIT_* values supported. */
+    uint32_t write_actions;  /* Bitmap of OFPAT_* that are supported
+                                by the table with OFPIT_WRITE_ACTIONS. */
+    uint32_t apply_actions;  /* Bitmap of OFPAT_* that are supported
+                                by the table with OFPIT_APPLY_ACTIONS. */
+    uint32_t config;         /* Bitmap of OFPTC_* values */
+    uint32_t max_entries;    /* Max number of entries supported. */
+    uint32_t active_count;   /* Number of active entries. */
+    uint64_t lookup_count;   /* Number of packets looked up in table. */
+    uint64_t matched_count;  /* Number of packets that hit table. */
+};
+OFP_ASSERT(sizeof(struct ofp_table_stats) == 88);
+
+/* Body for ofp_stats_request of type OFPST_PORT. */
+struct ofp_port_stats_request {
+    uint32_t port_no;        /* OFPST_PORT message must request statistics
+                              * either for a single port (specified in
+                              * port_no) or for all ports (if port_no ==
+                              * OFPP_ANY). */
+    uint8_t pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_port_stats_request) == 8);
+
+/* Body of reply to OFPST_PORT request. If a counter is unsupported, set
+ * the field to all ones. */
+struct ofp_port_stats {
+    uint32_t port_no;
+    uint8_t pad[4];          /* Align to 64-bits. */
+    uint64_t rx_packets;     /* Number of received packets. */
+    uint64_t tx_packets;     /* Number of transmitted packets. */
+    uint64_t rx_bytes;       /* Number of received bytes. */
+    uint64_t tx_bytes;       /* Number of transmitted bytes. */
+    uint64_t rx_dropped;     /* Number of packets dropped by RX. */
+    uint64_t tx_dropped;     /* Number of packets dropped by TX. */
+    uint64_t rx_errors;      /* Number of receive errors.  This is a super-set
+                                of more specific receive errors and should be
+                                greater than or equal to the sum of all
+                                rx_*_err values. */
+    uint64_t tx_errors;      /* Number of transmit errors.  This is a super-set
+                                of more specific transmit errors and should be
+                                greater than or equal to the sum of all
+                                tx_*_err values (none currently defined.) */
+    uint64_t rx_frame_err;   /* Number of frame alignment errors. */
+    uint64_t rx_over_err;    /* Number of packets with RX overrun. */
+    uint64_t rx_crc_err;     /* Number of CRC errors. */
+    uint64_t collisions;     /* Number of collisions. */
+};
+OFP_ASSERT(sizeof(struct ofp_port_stats) == 104);
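+
+/* Illustrative sketch: as noted above, an unsupported counter is reported
+ * as all ones, so consumers should treat that value as "not available"
+ * rather than as a real count. */
+static inline int
+example_counter_is_supported(uint64_t counter)
+{
+    return counter != 0xffffffffffffffffULL;
+}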
+
+/* Body of OFPST_GROUP request. */
+struct ofp_group_stats_request {
+    uint32_t group_id;       /* All groups if OFPG_ALL. */
+    uint8_t pad[4];          /* Align to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_group_stats_request) == 8);
+
+/* Used in group stats replies. */
+struct ofp_bucket_counter {
+    uint64_t packet_count;   /* Number of packets processed by bucket. */
+    uint64_t byte_count;     /* Number of bytes processed by bucket. */
+};
+OFP_ASSERT(sizeof(struct ofp_bucket_counter) == 16);
+
+/* Body of reply to OFPST_GROUP request. */
+struct ofp_group_stats {
+    uint16_t length;         /* Length of this entry. */
+    uint8_t pad[2];          /* Align to 64 bits. */
+    uint32_t group_id;       /* Group identifier. */
+    uint32_t ref_count;      /* Number of flows or groups that directly forward
+                                to this group. */
+    uint8_t pad2[4];         /* Align to 64 bits. */
+    uint64_t packet_count;   /* Number of packets processed by group. */
+    uint64_t byte_count;     /* Number of bytes processed by group. */
+    struct ofp_bucket_counter bucket_stats[0];
+};
+OFP_ASSERT(sizeof(struct ofp_group_stats) == 32);
+
+/* Body of reply to OFPST_GROUP_DESC request. */
+struct ofp_group_desc_stats {
+    uint16_t length;              /* Length of this entry. */
+    uint8_t type;                 /* One of OFPGT_*. */
+    uint8_t pad;                  /* Pad to 64 bits. */
+    uint32_t group_id;            /* Group identifier. */
+    struct ofp_bucket buckets[0];
+};
+OFP_ASSERT(sizeof(struct ofp_group_desc_stats) == 8);
+
+/* Experimenter extension. */
+struct ofp_experimenter_header {
+    struct ofp_header header;   /* Type OFPT_EXPERIMENTER. */
+    uint32_t experimenter;      /* Experimenter ID:
+                                 * - MSB 0: low-order bytes are IEEE OUI.
+                                 * - MSB != 0: defined by OpenFlow
+                                 *   consortium. */
+    uint8_t pad[4];
+    /* Experimenter-defined arbitrary additional data. */
+};
+OFP_ASSERT(sizeof(struct ofp_experimenter_header) == 16);
+
+/* All ones is used to indicate all queues in a port (for stats retrieval). */
+#define OFPQ_ALL      0xffffffff
+
+/* Min rate > 1000 means not configured. */
+#define OFPQ_MIN_RATE_UNCFG      0xffff
+
+enum ofp_queue_properties {
+    OFPQT_NONE = 0,       /* No property defined for queue (default). */
+    OFPQT_MIN_RATE,       /* Minimum data rate guaranteed. */
+                          /* Other types should be added here
+                           * (e.g. max rate, precedence, etc.). */
+};
+
+/* Common description for a queue. */
+struct ofp_queue_prop_header {
+    uint16_t property;    /* One of OFPQT_. */
+    uint16_t len;         /* Length of property, including this header. */
+    uint8_t pad[4];       /* 64-bit alignment. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_header) == 8);
+
+/* Min-Rate queue property description. */
+struct ofp_queue_prop_min_rate {
+    struct ofp_queue_prop_header prop_header; /* prop: OFPQT_MIN_RATE, len: 16. */
+    uint16_t rate;        /* In 1/10 of a percent; >1000 -> disabled. */
+    uint8_t pad[6];       /* 64-bit alignment */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_min_rate) == 16);
+
+/* Full description for a queue. */
+struct ofp_packet_queue {
+    uint32_t queue_id;     /* id for the specific queue. */
+    uint16_t len;          /* Length in bytes of this queue desc. */
+    uint8_t pad[2];        /* 64-bit alignment. */
+    struct ofp_queue_prop_header properties[0]; /* List of properties. */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_queue) == 8);
+
+/* Query for port queue configuration. */
+struct ofp_queue_get_config_request {
+    struct ofp_header header;
+    uint32_t port;         /* Port to be queried. Should refer
+                              to a valid physical port (i.e. < OFPP_MAX) */
+    uint8_t pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_queue_get_config_request) == 16);
+
+/* Queue configuration for a given port. */
+struct ofp_queue_get_config_reply {
+    struct ofp_header header;
+    uint32_t port;
+    uint8_t pad[4];
+    struct ofp_packet_queue queues[0]; /* List of configured queues. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_get_config_reply) == 16);
+
+/* OFPAT_SET_QUEUE action struct: send packets to given queue on port. */
+struct ofp_action_set_queue {
+    uint16_t type;            /* OFPAT_SET_QUEUE. */
+    uint16_t len;             /* Len is 8. */
+    uint32_t queue_id;        /* Queue id for the packets. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_set_queue) == 8);
+
+struct ofp_queue_stats_request {
+    uint32_t port_no;        /* All ports if OFPP_ANY. */
+    uint32_t queue_id;       /* All queues if OFPQ_ALL. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_stats_request) == 8);
+
+struct ofp_queue_stats {
+    uint32_t port_no;
+    uint32_t queue_id;       /* Queue id. */
+    uint64_t tx_bytes;       /* Number of transmitted bytes. */
+    uint64_t tx_packets;     /* Number of transmitted packets. */
+    uint64_t tx_errors;      /* Number of packets dropped due to overrun. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_stats) == 32);
+
+#endif /* openflow/openflow.h */
diff --git a/canonical/openflow.h-1.2 b/canonical/openflow.h-1.2
new file mode 100644
index 0000000..3e26ea5
--- /dev/null
+++ b/canonical/openflow.h-1.2
@@ -0,0 +1,1873 @@
+/* Copyright (c) 2008 The Board of Trustees of The Leland Stanford
+ * Junior University
+ * Copyright (c) 2011 Open Networking Foundation
+ *
+ * We are making the OpenFlow specification and associated documentation
+ * (Software) available for public use and benefit with the expectation
+ * that others will use, modify and enhance the Software and contribute
+ * those enhancements back to the community. However, since we would
+ * like to make the Software available for broadest use, with as few
+ * restrictions as possible permission is hereby granted, free of
+ * charge, to any person obtaining a copy of this Software to deal in
+ * the Software under the copyrights without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT.  IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * The name and trademarks of copyright holder(s) may NOT be used in
+ * advertising or publicity pertaining to the Software or any
+ * derivatives without specific, written prior permission.
+ */
+
+/* OpenFlow: protocol between controller and datapath. */
+
+#ifndef OPENFLOW_OPENFLOW_H
+#define OPENFLOW_OPENFLOW_H 1
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#else
+#include <stdint.h>
+#endif
+
+#ifdef SWIG
+#define OFP_ASSERT(EXPR)        /* SWIG can't handle OFP_ASSERT. */
+#elif !defined(__cplusplus)
+/* Build-time assertion for use in a declaration context. */
+#define OFP_ASSERT(EXPR)                                                \
+        extern int (*build_assert(void))[ sizeof(struct {               \
+                    unsigned int build_assert_failed : (EXPR) ? 1 : -1; })]
+#else /* __cplusplus */
+#define OFP_ASSERT(_EXPR) typedef int build_assert_failed[(_EXPR) ? 1 : -1]
+#endif /* __cplusplus */
+
+#ifndef SWIG
+#define OFP_PACKED __attribute__((packed))
+#else
+#define OFP_PACKED              /* SWIG doesn't understand __attribute. */
+#endif
+
+/* Version number:
+ * Non-experimental versions released: 0x01 = 1.0 ; 0x02 = 1.1 ; 0x03 = 1.2
+ * Experimental versions released: 0x81 -- 0x99
+ */
+/* The most significant bit being set in the version field indicates an
+ * experimental OpenFlow version.
+ */
+#define OFP_VERSION   0x03
+
+#define OFP_MAX_TABLE_NAME_LEN 32
+#define OFP_MAX_PORT_NAME_LEN  16
+
+#define OFP_TCP_PORT  6633
+#define OFP_SSL_PORT  6633
+
+#define OFP_ETH_ALEN 6          /* Bytes in an Ethernet address. */
+
+/* Port numbering. Ports are numbered starting from 1. */
+enum ofp_port_no {
+    /* Maximum number of physical and logical switch ports. */
+    OFPP_MAX        = 0xffffff00,
+
+    /* Reserved OpenFlow Port (fake output "ports"). */
+    OFPP_IN_PORT    = 0xfffffff8,  /* Send the packet out the input port.  This
+                                      reserved port must be explicitly used
+                                      in order to send back out of the input
+                                      port. */
+    OFPP_TABLE      = 0xfffffff9,  /* Submit the packet to the first flow table
+                                      NB: This destination port can only be
+                                      used in packet-out messages. */
+    OFPP_NORMAL     = 0xfffffffa,  /* Process with normal L2/L3 switching. */
+    OFPP_FLOOD      = 0xfffffffb,  /* All physical ports in VLAN, except input
+                                      port and those blocked or link down. */
+    OFPP_ALL        = 0xfffffffc,  /* All physical ports except input port. */
+    OFPP_CONTROLLER = 0xfffffffd,  /* Send to controller. */
+    OFPP_LOCAL      = 0xfffffffe,  /* Local openflow "port". */
+    OFPP_ANY        = 0xffffffff   /* Wildcard port used only for flow mod
+                                      (delete) and flow stats requests. Selects
+                                      all flows regardless of output port
+                                      (including flows with no output port). */
+};
+
+enum ofp_type {
+    /* Immutable messages. */
+    OFPT_HELLO              = 0,  /* Symmetric message */
+    OFPT_ERROR              = 1,  /* Symmetric message */
+    OFPT_ECHO_REQUEST       = 2,  /* Symmetric message */
+    OFPT_ECHO_REPLY         = 3,  /* Symmetric message */
+    OFPT_EXPERIMENTER       = 4,  /* Symmetric message */
+
+    /* Switch configuration messages. */
+    OFPT_FEATURES_REQUEST   = 5,  /* Controller/switch message */
+    OFPT_FEATURES_REPLY     = 6,  /* Controller/switch message */
+    OFPT_GET_CONFIG_REQUEST = 7,  /* Controller/switch message */
+    OFPT_GET_CONFIG_REPLY   = 8,  /* Controller/switch message */
+    OFPT_SET_CONFIG         = 9,  /* Controller/switch message */
+
+    /* Asynchronous messages. */
+    OFPT_PACKET_IN          = 10, /* Async message */
+    OFPT_FLOW_REMOVED       = 11, /* Async message */
+    OFPT_PORT_STATUS        = 12, /* Async message */
+
+    /* Controller command messages. */
+    OFPT_PACKET_OUT         = 13, /* Controller/switch message */
+    OFPT_FLOW_MOD           = 14, /* Controller/switch message */
+    OFPT_GROUP_MOD          = 15, /* Controller/switch message */
+    OFPT_PORT_MOD           = 16, /* Controller/switch message */
+    OFPT_TABLE_MOD          = 17, /* Controller/switch message */
+
+    /* Statistics messages. */
+    OFPT_STATS_REQUEST      = 18, /* Controller/switch message */
+    OFPT_STATS_REPLY        = 19, /* Controller/switch message */
+
+    /* Barrier messages. */
+    OFPT_BARRIER_REQUEST    = 20, /* Controller/switch message */
+    OFPT_BARRIER_REPLY      = 21, /* Controller/switch message */
+
+    /* Queue Configuration messages. */
+    OFPT_QUEUE_GET_CONFIG_REQUEST = 22,  /* Controller/switch message */
+    OFPT_QUEUE_GET_CONFIG_REPLY   = 23,  /* Controller/switch message */
+
+    /* Controller role change request messages. */
+    OFPT_ROLE_REQUEST       = 24, /* Controller/switch message */
+    OFPT_ROLE_REPLY         = 25, /* Controller/switch message */
+};
+
+/* Header on all OpenFlow packets. */
+struct ofp_header {
+    uint8_t version;    /* OFP_VERSION. */
+    uint8_t type;       /* One of the OFPT_ constants. */
+    uint16_t length;    /* Length including this ofp_header. */
+    uint32_t xid;       /* Transaction id associated with this packet.
+                           Replies use the same id as was in the request
+                           to facilitate pairing. */
+};
+OFP_ASSERT(sizeof(struct ofp_header) == 8);
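+
+/* Informative example (not normative): an OFPT_ECHO_REQUEST with an empty
+ * body for this version carries the header values
+ *     version = 0x03 (OFP_VERSION), type = 2 (OFPT_ECHO_REQUEST),
+ *     length  = 8 (header only),    xid = any value chosen by the sender,
+ * and the corresponding reply copies the same xid back. */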
+
+/* OFPT_HELLO.  This message has an empty body, but implementations must
+ * ignore any data included in the body, to allow for future extensions. */
+struct ofp_hello {
+    struct ofp_header header;
+};
+
+#define OFP_DEFAULT_MISS_SEND_LEN   128
+
+enum ofp_config_flags {
+    /* Handling of IP fragments. */
+    OFPC_FRAG_NORMAL   = 0,       /* No special handling for fragments. */
+    OFPC_FRAG_DROP     = 1 << 0,  /* Drop fragments. */
+    OFPC_FRAG_REASM    = 1 << 1,  /* Reassemble (only if OFPC_IP_REASM set). */
+    OFPC_FRAG_MASK     = 3,
+
+    /* TTL processing - applicable for IP and MPLS packets */
+    OFPC_INVALID_TTL_TO_CONTROLLER = 1 << 2, /* Send packets with invalid TTL
+                                                to the controller */
+};
+
+/* Switch configuration. */
+struct ofp_switch_config {
+    struct ofp_header header;
+    uint16_t flags;             /* OFPC_* flags. */
+    uint16_t miss_send_len;     /* Max bytes of new flow that datapath
+                                   should send to the controller. See
+                                   ofp_controller_max_len for valid values.
+                                   */
+};
+OFP_ASSERT(sizeof(struct ofp_switch_config) == 12);
+
+/* Flags to indicate behavior of the flow table for unmatched packets.
+   These flags are used in ofp_table_stats messages to describe the current
+   configuration and in ofp_table_mod messages to configure table behavior. */
+enum ofp_table_config {
+    OFPTC_TABLE_MISS_CONTROLLER = 0,      /* Send to controller. */
+    OFPTC_TABLE_MISS_CONTINUE   = 1 << 0, /* Continue to the next table in the
+                                             pipeline (OpenFlow 1.0
+                                             behavior). */
+    OFPTC_TABLE_MISS_DROP       = 1 << 1, /* Drop the packet. */
+    OFPTC_TABLE_MISS_MASK       = 3
+};
+
+/* Table numbering. Tables can use any number up to OFPTT_MAX. */
+enum ofp_table {
+    /* Last usable table number. */
+    OFPTT_MAX        = 0xfe,
+
+    /* Fake tables. */
+    OFPTT_ALL        = 0xff   /* Wildcard table used for table config,
+                                 flow stats and flow deletes. */
+};
+
+
+/* Configure/Modify behavior of a flow table */
+struct ofp_table_mod {
+    struct ofp_header header;
+    uint8_t table_id;       /* ID of the table, OFPTT_ALL indicates all tables */
+    uint8_t pad[3];         /* Pad to 32 bits */
+    uint32_t config;        /* Bitmap of OFPTC_* flags */
+};
+OFP_ASSERT(sizeof(struct ofp_table_mod) == 16);
+
+/* Capabilities supported by the datapath. */
+enum ofp_capabilities {
+    OFPC_FLOW_STATS     = 1 << 0,  /* Flow statistics. */
+    OFPC_TABLE_STATS    = 1 << 1,  /* Table statistics. */
+    OFPC_PORT_STATS     = 1 << 2,  /* Port statistics. */
+    OFPC_GROUP_STATS    = 1 << 3,  /* Group statistics. */
+    OFPC_IP_REASM       = 1 << 5,  /* Can reassemble IP fragments. */
+    OFPC_QUEUE_STATS    = 1 << 6,  /* Queue statistics. */
+    OFPC_PORT_BLOCKED   = 1 << 8   /* Switch will block looping ports. */
+};
+
+/* Flags to indicate behavior of the physical port.  These flags are
+ * used in ofp_port to describe the current configuration.  They are
+ * used in the ofp_port_mod message to configure the port's behavior.
+ */
+enum ofp_port_config {
+    OFPPC_PORT_DOWN    = 1 << 0,  /* Port is administratively down. */
+
+    OFPPC_NO_RECV      = 1 << 2,  /* Drop all packets received by port. */
+    OFPPC_NO_FWD       = 1 << 5,  /* Drop packets forwarded to port. */
+    OFPPC_NO_PACKET_IN = 1 << 6   /* Do not send packet-in msgs for port. */
+};
+
+/* Current state of the physical port.  These are not configurable from
+ * the controller.
+ */
+enum ofp_port_state {
+    OFPPS_LINK_DOWN    = 1 << 0,  /* No physical link present. */
+    OFPPS_BLOCKED      = 1 << 1,  /* Port is blocked */
+    OFPPS_LIVE         = 1 << 2,  /* Live for Fast Failover Group. */
+};
+
+/* Features of ports available in a datapath. */
+enum ofp_port_features {
+    OFPPF_10MB_HD    = 1 << 0,  /* 10 Mb half-duplex rate support. */
+    OFPPF_10MB_FD    = 1 << 1,  /* 10 Mb full-duplex rate support. */
+    OFPPF_100MB_HD   = 1 << 2,  /* 100 Mb half-duplex rate support. */
+    OFPPF_100MB_FD   = 1 << 3,  /* 100 Mb full-duplex rate support. */
+    OFPPF_1GB_HD     = 1 << 4,  /* 1 Gb half-duplex rate support. */
+    OFPPF_1GB_FD     = 1 << 5,  /* 1 Gb full-duplex rate support. */
+    OFPPF_10GB_FD    = 1 << 6,  /* 10 Gb full-duplex rate support. */
+    OFPPF_40GB_FD    = 1 << 7,  /* 40 Gb full-duplex rate support. */
+    OFPPF_100GB_FD   = 1 << 8,  /* 100 Gb full-duplex rate support. */
+    OFPPF_1TB_FD     = 1 << 9,  /* 1 Tb full-duplex rate support. */
+    OFPPF_OTHER      = 1 << 10, /* Other rate, not in the list. */
+
+    OFPPF_COPPER     = 1 << 11, /* Copper medium. */
+    OFPPF_FIBER      = 1 << 12, /* Fiber medium. */
+    OFPPF_AUTONEG    = 1 << 13, /* Auto-negotiation. */
+    OFPPF_PAUSE      = 1 << 14, /* Pause. */
+    OFPPF_PAUSE_ASYM = 1 << 15  /* Asymmetric pause. */
+};
+
+/* Description of a port */
+struct ofp_port {
+    uint32_t port_no;
+    uint8_t pad[4];
+    uint8_t hw_addr[OFP_ETH_ALEN];
+    uint8_t pad2[2];                  /* Align to 64 bits. */
+    char name[OFP_MAX_PORT_NAME_LEN]; /* Null-terminated */
+
+    uint32_t config;        /* Bitmap of OFPPC_* flags. */
+    uint32_t state;         /* Bitmap of OFPPS_* flags. */
+
+    /* Bitmaps of OFPPF_* that describe features.  All bits zeroed if
+     * unsupported or unavailable. */
+    uint32_t curr;          /* Current features. */
+    uint32_t advertised;    /* Features being advertised by the port. */
+    uint32_t supported;     /* Features supported by the port. */
+    uint32_t peer;          /* Features advertised by peer. */
+
+    uint32_t curr_speed;    /* Current port bitrate in kbps. */
+    uint32_t max_speed;     /* Max port bitrate in kbps */
+};
+OFP_ASSERT(sizeof(struct ofp_port) == 64);
+
+/* Switch features. */
+struct ofp_switch_features {
+    struct ofp_header header;
+    uint64_t datapath_id;   /* Datapath unique ID.  The lower 48-bits are for
+                               a MAC address, while the upper 16-bits are
+                               implementer-defined. */
+
+    uint32_t n_buffers;     /* Max packets buffered at once. */
+
+    uint8_t n_tables;       /* Number of tables supported by datapath. */
+    uint8_t pad[3];         /* Align to 64-bits. */
+
+    /* Features. */
+    uint32_t capabilities;  /* Bitmap of supported "ofp_capabilities". */
+    uint32_t reserved;
+
+    /* Port info.*/
+    struct ofp_port ports[0];  /* Port definitions.  The number of ports
+                                  is inferred from the length field in
+                                  the header. */
+};
+OFP_ASSERT(sizeof(struct ofp_switch_features) == 32);
+
+/* What changed about the physical port */
+enum ofp_port_reason {
+    OFPPR_ADD     = 0,         /* The port was added. */
+    OFPPR_DELETE  = 1,         /* The port was removed. */
+    OFPPR_MODIFY  = 2,         /* Some attribute of the port has changed. */
+};
+
+/* A physical port has changed in the datapath */
+struct ofp_port_status {
+    struct ofp_header header;
+    uint8_t reason;          /* One of OFPPR_*. */
+    uint8_t pad[7];          /* Align to 64-bits. */
+    struct ofp_port desc;
+};
+OFP_ASSERT(sizeof(struct ofp_port_status) == 80);
+
+/* Modify behavior of the physical port */
+struct ofp_port_mod {
+    struct ofp_header header;
+    uint32_t port_no;
+    uint8_t pad[4];
+    uint8_t hw_addr[OFP_ETH_ALEN]; /* The hardware address is not
+                                      configurable.  This is used to
+                                      sanity-check the request, so it must
+                                      be the same as returned in an
+                                      ofp_port struct. */
+    uint8_t pad2[2];        /* Pad to 64 bits. */
+    uint32_t config;        /* Bitmap of OFPPC_* flags. */
+    uint32_t mask;          /* Bitmap of OFPPC_* flags to be changed. */
+
+    uint32_t advertise;     /* Bitmap of OFPPF_*.  Zero all bits to prevent
+                               any action taking place. */
+    uint8_t pad3[4];        /* Pad to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_port_mod) == 40);
+
+/* ## -------------------------- ## */
+/* ## OpenFlow Extensible Match. ## */
+/* ## -------------------------- ## */
+
+/* The match type indicates the match structure (set of fields that compose the
+ * match) in use. The match type is placed in the type field at the beginning
+ * of all match structures. The "OpenFlow Extensible Match" type corresponds
+ * to OXM TLV format described below and must be supported by all OpenFlow
+ * switches. Extensions that define other match types may be published on the
+ * ONF wiki. Support for extensions is optional.
+ */
+enum ofp_match_type {
+    OFPMT_STANDARD = 0,       /* Deprecated. */
+    OFPMT_OXM      = 1,       /* OpenFlow Extensible Match */
+};
+
+/* Fields to match against flows */
+struct ofp_match {
+    uint16_t type;             /* One of OFPMT_* */
+    uint16_t length;           /* Length of ofp_match (excluding padding) */
+    /* Followed by:
+     *   - Exactly (length - 4) (possibly 0) bytes containing OXM TLVs, then
+     *   - Exactly ((length + 7)/8*8 - length) (between 0 and 7) bytes of
+     *     all-zero bytes
+     * In summary, ofp_match is padded as needed, to make its overall size
+     * a multiple of 8, to preserve alignment in structures using it.
+     */
+    uint8_t oxm_fields[4];     /* OXMs start here - Make compiler happy */
+};
+OFP_ASSERT(sizeof(struct ofp_match) == 8);
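+
+/* Informative example (not normative): a match carrying a single
+ * OXM_OF_IN_PORT TLV (4-byte OXM header + 4-byte value) has
+ *     length = 4 + 8 = 12,
+ * and is followed by (12 + 7)/8*8 - 12 = 4 all-zero bytes, so the padded
+ * match occupies 16 bytes on the wire. */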
+
+/* Components of a OXM TLV header. */
+#define OXM_HEADER__(CLASS, FIELD, HASMASK, LENGTH) \
+    (((CLASS) << 16) | ((FIELD) << 9) | ((HASMASK) << 8) | (LENGTH))
+#define OXM_HEADER(CLASS, FIELD, LENGTH) \
+    OXM_HEADER__(CLASS, FIELD, 0, LENGTH)
+#define OXM_HEADER_W(CLASS, FIELD, LENGTH) \
+    OXM_HEADER__(CLASS, FIELD, 1, (LENGTH) * 2)
+#define OXM_CLASS(HEADER) ((HEADER) >> 16)
+#define OXM_FIELD(HEADER) (((HEADER) >> 9) & 0x7f)
+#define OXM_TYPE(HEADER) (((HEADER) >> 9) & 0x7fffff)
+#define OXM_HASMASK(HEADER) (((HEADER) >> 8) & 1)
+#define OXM_LENGTH(HEADER) ((HEADER) & 0xff)
+
+#define OXM_MAKE_WILD_HEADER(HEADER) \
+        OXM_HEADER_W(OXM_CLASS(HEADER), OXM_FIELD(HEADER), OXM_LENGTH(HEADER))
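+
+/* Informative examples (not normative): with the macros above,
+ *     OXM_HEADER  (0x8000, OFPXMT_OFB_IN_PORT, 4) == 0x80000004
+ *     OXM_HEADER  (0x8000, OFPXMT_OFB_ETH_DST, 6) == 0x80000606
+ *     OXM_HEADER_W(0x8000, OFPXMT_OFB_ETH_DST, 6) == 0x8000070c
+ * OXM_LENGTH() of a masked header covers both the value and the mask,
+ * which is why OXM_HEADER_W doubles LENGTH. */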
+
+/* OXM Class IDs.
+ * The high-order bit differentiates reserved classes from member classes.
+ * Classes 0x0000 to 0x7FFF are member classes, allocated by ONF.
+ * Classes 0x8000 to 0xFFFE are reserved classes, reserved for standardisation.
+ */
+enum ofp_oxm_class {
+    OFPXMC_NXM_0          = 0x0000,    /* Backward compatibility with NXM */
+    OFPXMC_NXM_1          = 0x0001,    /* Backward compatibility with NXM */
+    OFPXMC_OPENFLOW_BASIC = 0x8000,    /* Basic class for OpenFlow */
+    OFPXMC_EXPERIMENTER   = 0xFFFF,    /* Experimenter class */
+};
+
+/* OXM Flow match field types for OpenFlow basic class. */
+enum oxm_ofb_match_fields {
+    OFPXMT_OFB_IN_PORT        = 0,  /* Switch input port. */
+    OFPXMT_OFB_IN_PHY_PORT    = 1,  /* Switch physical input port. */
+    OFPXMT_OFB_METADATA       = 2,  /* Metadata passed between tables. */
+    OFPXMT_OFB_ETH_DST        = 3,  /* Ethernet destination address. */
+    OFPXMT_OFB_ETH_SRC        = 4,  /* Ethernet source address. */
+    OFPXMT_OFB_ETH_TYPE       = 5,  /* Ethernet frame type. */
+    OFPXMT_OFB_VLAN_VID       = 6,  /* VLAN id. */
+    OFPXMT_OFB_VLAN_PCP       = 7,  /* VLAN priority. */
+    OFPXMT_OFB_IP_DSCP        = 8,  /* IP DSCP (6 bits in ToS field). */
+    OFPXMT_OFB_IP_ECN         = 9,  /* IP ECN (2 bits in ToS field). */
+    OFPXMT_OFB_IP_PROTO       = 10, /* IP protocol. */
+    OFPXMT_OFB_IPV4_SRC       = 11, /* IPv4 source address. */
+    OFPXMT_OFB_IPV4_DST       = 12, /* IPv4 destination address. */
+    OFPXMT_OFB_TCP_SRC        = 13, /* TCP source port. */
+    OFPXMT_OFB_TCP_DST        = 14, /* TCP destination port. */
+    OFPXMT_OFB_UDP_SRC        = 15, /* UDP source port. */
+    OFPXMT_OFB_UDP_DST        = 16, /* UDP destination port. */
+    OFPXMT_OFB_SCTP_SRC       = 17, /* SCTP source port. */
+    OFPXMT_OFB_SCTP_DST       = 18, /* SCTP destination port. */
+    OFPXMT_OFB_ICMPV4_TYPE    = 19, /* ICMP type. */
+    OFPXMT_OFB_ICMPV4_CODE    = 20, /* ICMP code. */
+    OFPXMT_OFB_ARP_OP         = 21, /* ARP opcode. */
+    OFPXMT_OFB_ARP_SPA        = 22, /* ARP source IPv4 address. */
+    OFPXMT_OFB_ARP_TPA        = 23, /* ARP target IPv4 address. */
+    OFPXMT_OFB_ARP_SHA        = 24, /* ARP source hardware address. */
+    OFPXMT_OFB_ARP_THA        = 25, /* ARP target hardware address. */
+    OFPXMT_OFB_IPV6_SRC       = 26, /* IPv6 source address. */
+    OFPXMT_OFB_IPV6_DST       = 27, /* IPv6 destination address. */
+    OFPXMT_OFB_IPV6_FLABEL    = 28, /* IPv6 Flow Label */
+    OFPXMT_OFB_ICMPV6_TYPE    = 29, /* ICMPv6 type. */
+    OFPXMT_OFB_ICMPV6_CODE    = 30, /* ICMPv6 code. */
+    OFPXMT_OFB_IPV6_ND_TARGET = 31, /* Target address for ND. */
+    OFPXMT_OFB_IPV6_ND_SLL    = 32, /* Source link-layer for ND. */
+    OFPXMT_OFB_IPV6_ND_TLL    = 33, /* Target link-layer for ND. */
+    OFPXMT_OFB_MPLS_LABEL     = 34, /* MPLS label. */
+    OFPXMT_OFB_MPLS_TC        = 35, /* MPLS TC. */
+};
+
+#define OFPXMT_OFB_ALL    ((UINT64_C(1) << 36) - 1)
+
+/* OpenFlow port on which the packet was received.
+ * May be a physical port, a logical port, or the reserved port OFPP_LOCAL
+ *
+ * Prereqs: None.
+ *
+ * Format: 32-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IN_PORT    OXM_HEADER  (0x8000, OFPXMT_OFB_IN_PORT, 4)
+
+/* Physical port on which the packet was received.
+ *
+ * Consider a packet received on a tunnel interface defined over a link
+ * aggregation group (LAG) with two physical port members, where the tunnel
+ * interface is the logical port bound to OpenFlow.  In this case,
+ * OFPXMT_OF_IN_PORT is the tunnel's port number and OFPXMT_OF_IN_PHY_PORT is
+ * the physical port number of the LAG on which the tunnel is configured.
+ *
+ * When a packet is received directly on a physical port and not processed by a
+ * logical port, OFPXMT_OF_IN_PORT and OFPXMT_OF_IN_PHY_PORT have the same
+ * value.
+ *
+ * This field is usually not available in a regular match and only available
+ * in ofp_packet_in messages when it's different from OXM_OF_IN_PORT.
+ *
+ * Prereqs: OXM_OF_IN_PORT must be present.
+ *
+ * Format: 32-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IN_PHY_PORT OXM_HEADER  (0x8000, OFPXMT_OFB_IN_PHY_PORT, 4)
+
+/* Table metadata.
+ *
+ * Prereqs: None.
+ *
+ * Format: 64-bit integer in network byte order.
+ *
+ * Masking: Arbitrary masks.
+ */
+#define OXM_OF_METADATA   OXM_HEADER  (0x8000, OFPXMT_OFB_METADATA, 8)
+#define OXM_OF_METADATA_W OXM_HEADER_W(0x8000, OFPXMT_OFB_METADATA, 8)
+
+/* Source or destination address in Ethernet header.
+ *
+ * Prereqs: None.
+ *
+ * Format: 48-bit Ethernet MAC address.
+ *
+ * Masking: Arbitrary masks. */
+#define OXM_OF_ETH_DST    OXM_HEADER  (0x8000, OFPXMT_OFB_ETH_DST, 6)
+#define OXM_OF_ETH_DST_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_ETH_DST, 6)
+#define OXM_OF_ETH_SRC    OXM_HEADER  (0x8000, OFPXMT_OFB_ETH_SRC, 6)
+#define OXM_OF_ETH_SRC_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_ETH_SRC, 6)
+
+/* Packet's Ethernet type.
+ *
+ * Prereqs: None.
+ *
+ * Format: 16-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_ETH_TYPE   OXM_HEADER  (0x8000, OFPXMT_OFB_ETH_TYPE, 2)
+
+/* The VLAN id is 12-bits, so we can use the entire 16 bits to indicate
+ * special conditions.
+ */
+enum ofp_vlan_id {
+    OFPVID_PRESENT = 0x1000, /* Bit that indicates that a VLAN id is set. */
+    OFPVID_NONE    = 0x0000, /* No VLAN id was set. */
+};
+/* Define for compatibility */
+#define OFP_VLAN_NONE      OFPVID_NONE
+
+/* 802.1Q VID.
+ *
+ * For a packet with an 802.1Q header, this is the VLAN-ID (VID) from the
+ * outermost tag, with the CFI bit forced to 1. For a packet with no 802.1Q
+ * header, this has value OFPVID_NONE.
+ *
+ * Prereqs: None.
+ *
+ * Format: 16-bit integer in network byte order with bit 13 indicating
+ * presence of VLAN header and 3 most-significant bits forced to 0.
+ * Only the lower 13 bits have meaning.
+ *
+ * Masking: Arbitrary masks.
+ *
+ * This field can be used in various ways:
+ *
+ *   - If it is not constrained at all, the match matches packets without
+ *     an 802.1Q header or with an 802.1Q header that has any VID value.
+ *
+ *   - Testing for an exact match with 0x0 matches only packets without
+ *     an 802.1Q header.
+ *
+ *   - Testing for an exact match with a VID value with CFI=1 matches packets
+ *     that have an 802.1Q header with a specified VID.
+ *
+ *   - Testing for an exact match with a nonzero VID value with CFI=0 does
+ *     not make sense.  The switch may reject this combination.
+ *
+ *   - Testing with oxm_value=0, oxm_mask=0x0fff matches packets with no 802.1Q
+ *     header or with an 802.1Q header with a VID of 0.
+ *
+ *   - Testing with oxm_value=0x1000, oxm_mask=0x1000 matches packets with
+ *     an 802.1Q header that has any VID value.
+ */
+#define OXM_OF_VLAN_VID   OXM_HEADER  (0x8000, OFPXMT_OFB_VLAN_VID, 2)
+#define OXM_OF_VLAN_VID_W OXM_HEADER_W(0x8000, OFPXMT_OFB_VLAN_VID, 2)
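+
+/* Informative example (not normative): to match packets tagged with VLAN 100
+ * exactly, use OXM_OF_VLAN_VID with value OFPVID_PRESENT | 100 = 0x1064 and
+ * no mask; to match "any tagged packet", use OXM_OF_VLAN_VID_W with value
+ * 0x1000 and mask 0x1000 as described above. */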
+
+/* 802.1Q PCP.
+ *
+ * For a packet with an 802.1Q header, this is the VLAN-PCP from the
+ * outermost tag.  For a packet with no 802.1Q header, this has value
+ * 0.
+ *
+ * Prereqs: OXM_OF_VLAN_VID must be different from OFPVID_NONE.
+ *
+ * Format: 8-bit integer with 5 most-significant bits forced to 0.
+ * Only the lower 3 bits have meaning.
+ *
+ * Masking: Not maskable.
+ */
+#define OXM_OF_VLAN_PCP   OXM_HEADER  (0x8000, OFPXMT_OFB_VLAN_PCP, 1)
+
+/* The Diff Serv Code Point (DSCP) bits of the IP header.
+ * Part of the IPv4 ToS field or the IPv6 Traffic Class field.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must be either 0x0800 or 0x86dd.
+ *
+ * Format: 8-bit integer with 2 most-significant bits forced to 0.
+ * Only the lower 6 bits have meaning.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IP_DSCP     OXM_HEADER  (0x8000, OFPXMT_OFB_IP_DSCP, 1)
+
+/* The ECN bits of the IP header.
+ * Part of the IPv4 ToS field or the IPv6 Traffic Class field.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must be either 0x0800 or 0x86dd.
+ *
+ * Format: 8-bit integer with 6 most-significant bits forced to 0.
+ * Only the lower 2 bits have meaning.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IP_ECN     OXM_HEADER  (0x8000, OFPXMT_OFB_IP_ECN, 1)
+
+/* The "protocol" byte in the IP header.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must be either 0x0800 or 0x86dd.
+ *
+ * Format: 8-bit integer.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IP_PROTO   OXM_HEADER  (0x8000, OFPXMT_OFB_IP_PROTO, 1)
+
+/* The source or destination address in the IP header.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must match 0x0800 exactly.
+ *
+ * Format: 32-bit integer in network byte order.
+ *
+ * Masking: Arbitrary masks.
+ */
+#define OXM_OF_IPV4_SRC     OXM_HEADER  (0x8000, OFPXMT_OFB_IPV4_SRC, 4)
+#define OXM_OF_IPV4_SRC_W   OXM_HEADER_W(0x8000, OFPXMT_OFB_IPV4_SRC, 4)
+#define OXM_OF_IPV4_DST     OXM_HEADER  (0x8000, OFPXMT_OFB_IPV4_DST, 4)
+#define OXM_OF_IPV4_DST_W   OXM_HEADER_W(0x8000, OFPXMT_OFB_IPV4_DST, 4)
+
+/* The source or destination port in the TCP header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must be either 0x0800 or 0x86dd.
+ *   OXM_OF_IP_PROTO must match 6 exactly.
+ *
+ * Format: 16-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_TCP_SRC    OXM_HEADER  (0x8000, OFPXMT_OFB_TCP_SRC, 2)
+#define OXM_OF_TCP_DST    OXM_HEADER  (0x8000, OFPXMT_OFB_TCP_DST, 2)
+
+/* The source or destination port in the UDP header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match either 0x0800 or 0x86dd.
+ *   OXM_OF_IP_PROTO must match 17 exactly.
+ *
+ * Format: 16-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_UDP_SRC    OXM_HEADER  (0x8000, OFPXMT_OFB_UDP_SRC, 2)
+#define OXM_OF_UDP_DST    OXM_HEADER  (0x8000, OFPXMT_OFB_UDP_DST, 2)
+
+/* The source or destination port in the SCTP header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match either 0x0800 or 0x86dd.
+ *   OXM_OF_IP_PROTO must match 132 exactly.
+ *
+ * Format: 16-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_SCTP_SRC   OXM_HEADER  (0x8000, OFPXMT_OFB_SCTP_SRC, 2)
+#define OXM_OF_SCTP_DST   OXM_HEADER  (0x8000, OFPXMT_OFB_SCTP_DST, 2)
+
+/* The type or code in the ICMP header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x0800 exactly.
+ *   OXM_OF_IP_PROTO must match 1 exactly.
+ *
+ * Format: 8-bit integer.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_ICMPV4_TYPE  OXM_HEADER  (0x8000, OFPXMT_OFB_ICMPV4_TYPE, 1)
+#define OXM_OF_ICMPV4_CODE  OXM_HEADER  (0x8000, OFPXMT_OFB_ICMPV4_CODE, 1)
+
+/* ARP opcode.
+ *
+ * For an Ethernet+IP ARP packet, the opcode in the ARP header.  Always 0
+ * otherwise.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must match 0x0806 exactly.
+ *
+ * Format: 16-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_ARP_OP     OXM_HEADER  (0x8000, OFPXMT_OFB_ARP_OP, 2)
+
+/* For an Ethernet+IP ARP packet, the source or target protocol address
+ * in the ARP header.  Always 0 otherwise.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must match 0x0806 exactly.
+ *
+ * Format: 32-bit integer in network byte order.
+ *
+ * Masking: Arbitrary masks.
+ */
+#define OXM_OF_ARP_SPA    OXM_HEADER  (0x8000, OFPXMT_OFB_ARP_SPA, 4)
+#define OXM_OF_ARP_SPA_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_ARP_SPA, 4)
+#define OXM_OF_ARP_TPA    OXM_HEADER  (0x8000, OFPXMT_OFB_ARP_TPA, 4)
+#define OXM_OF_ARP_TPA_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_ARP_TPA, 4)
+
+/* For an Ethernet+IP ARP packet, the source or target hardware address
+ * in the ARP header.  Always 0 otherwise.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must match 0x0806 exactly.
+ *
+ * Format: 48-bit Ethernet MAC address.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_ARP_SHA    OXM_HEADER  (0x8000, OFPXMT_OFB_ARP_SHA, 6)
+#define OXM_OF_ARP_THA    OXM_HEADER  (0x8000, OFPXMT_OFB_ARP_THA, 6)
+
+/* The source or destination address in the IPv6 header.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must match 0x86dd exactly.
+ *
+ * Format: 128-bit IPv6 address.
+ *
+ * Masking: Arbitrary masks.
+ */
+#define OXM_OF_IPV6_SRC    OXM_HEADER  (0x8000, OFPXMT_OFB_IPV6_SRC, 16)
+#define OXM_OF_IPV6_SRC_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_IPV6_SRC, 16)
+#define OXM_OF_IPV6_DST    OXM_HEADER  (0x8000, OFPXMT_OFB_IPV6_DST, 16)
+#define OXM_OF_IPV6_DST_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_IPV6_DST, 16)
+
+/* The IPv6 Flow Label
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x86dd exactly
+ *
+ * Format: 32-bit integer with 12 most-significant bits forced to 0.
+ * Only the lower 20 bits have meaning.
+ *
+ * Masking: Maskable. */
+#define OXM_OF_IPV6_FLABEL   OXM_HEADER  (0x8000, OFPXMT_OFB_IPV6_FLABEL, 4)
+
+/* The type or code in the ICMPv6 header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x86dd exactly.
+ *   OXM_OF_IP_PROTO must match 58 exactly.
+ *
+ * Format: 8-bit integer.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_ICMPV6_TYPE OXM_HEADER  (0x8000, OFPXMT_OFB_ICMPV6_TYPE, 1)
+#define OXM_OF_ICMPV6_CODE OXM_HEADER  (0x8000, OFPXMT_OFB_ICMPV6_CODE, 1)
+
+/* The target address in an IPv6 Neighbor Discovery message.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x86dd exactly.
+ *   OXM_OF_IP_PROTO must match 58 exactly.
+ *   OXM_OF_ICMPV6_TYPE must be either 135 or 136.
+ *
+ * Format: 128-bit IPv6 address.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IPV6_ND_TARGET OXM_HEADER (0x8000, OFPXMT_OFB_IPV6_ND_TARGET, 16)
+
+/* The source link-layer address option in an IPv6 Neighbor Discovery
+ * message.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x86dd exactly.
+ *   OXM_OF_IP_PROTO must match 58 exactly.
+ *   OXM_OF_ICMPV6_TYPE must be exactly 135.
+ *
+ * Format: 48-bit Ethernet MAC address.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IPV6_ND_SLL  OXM_HEADER  (0x8000, OFPXMT_OFB_IPV6_ND_SLL, 6)
+
+/* The target link-layer address option in an IPv6 Neighbor Discovery
+ * message.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x86dd exactly.
+ *   OXM_OF_IP_PROTO must match 58 exactly.
+ *   OXM_OF_ICMPV6_TYPE must be exactly 136.
+ *
+ * Format: 48-bit Ethernet MAC address.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IPV6_ND_TLL  OXM_HEADER  (0x8000, OFPXMT_OFB_IPV6_ND_TLL, 6)
+
+/* The LABEL in the first MPLS shim header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x8847 or 0x8848 exactly.
+ *
+ * Format: 32-bit integer in network byte order with 12 most-significant
+ * bits forced to 0. Only the lower 20 bits have meaning.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_MPLS_LABEL  OXM_HEADER  (0x8000, OFPXMT_OFB_MPLS_LABEL, 4)
+
+/* The TC in the first MPLS shim header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x8847 or 0x8848 exactly.
+ *
+ * Format: 8-bit integer with 5 most-significant bits forced to 0.
+ * Only the lower 3 bits have meaning.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_MPLS_TC     OXM_HEADER  (0x8000, OFPXMT_OFB_MPLS_TC, 1)
+
+/* Header for OXM experimenter match fields. */
+struct ofp_oxm_experimenter_header {
+    uint32_t oxm_header;        /* oxm_class = OFPXMC_EXPERIMENTER */
+    uint32_t experimenter;      /* Experimenter ID which takes the same
+                                   form as in struct ofp_experimenter_header. */
+};
+OFP_ASSERT(sizeof(struct ofp_oxm_experimenter_header) == 8);
+
+/* ## ----------------- ## */
+/* ## OpenFlow Actions. ## */
+/* ## ----------------- ## */
+
+enum ofp_action_type {
+    OFPAT_OUTPUT       = 0,  /* Output to switch port. */
+    OFPAT_COPY_TTL_OUT = 11, /* Copy TTL "outwards" -- from next-to-outermost
+                                to outermost */
+    OFPAT_COPY_TTL_IN  = 12, /* Copy TTL "inwards" -- from outermost to
+                               next-to-outermost */
+    OFPAT_SET_MPLS_TTL = 15, /* MPLS TTL */
+    OFPAT_DEC_MPLS_TTL = 16, /* Decrement MPLS TTL */
+
+    OFPAT_PUSH_VLAN    = 17, /* Push a new VLAN tag */
+    OFPAT_POP_VLAN     = 18, /* Pop the outer VLAN tag */
+    OFPAT_PUSH_MPLS    = 19, /* Push a new MPLS tag */
+    OFPAT_POP_MPLS     = 20, /* Pop the outer MPLS tag */
+    OFPAT_SET_QUEUE    = 21, /* Set queue id when outputting to a port */
+    OFPAT_GROUP        = 22, /* Apply group. */
+    OFPAT_SET_NW_TTL   = 23, /* IP TTL. */
+    OFPAT_DEC_NW_TTL   = 24, /* Decrement IP TTL. */
+    OFPAT_SET_FIELD    = 25, /* Set a header field using OXM TLV format. */
+    OFPAT_EXPERIMENTER = 0xffff
+};
+
+enum ofp_controller_max_len {
+	OFPCML_MAX       = 0xffe5, /* maximum max_len value which can be used
+	                              to request a specific byte length. */
+	OFPCML_NO_BUFFER = 0xffff  /* indicates that no buffering should be
+	                              applied and the whole packet is to be
+	                              sent to the controller. */
+};
+
+/* Action structure for OFPAT_OUTPUT, which sends packets out 'port'.
+ * When the 'port' is the OFPP_CONTROLLER, 'max_len' indicates the max
+ * number of bytes to send.  A 'max_len' of zero means no bytes of the
+ * packet should be sent. A 'max_len' of OFPCML_NO_BUFFER means that
+ * the packet is not buffered and the complete packet is to be sent to
+ * the controller. */
+struct ofp_action_output {
+    uint16_t type;                  /* OFPAT_OUTPUT. */
+    uint16_t len;                   /* Length is 16. */
+    uint32_t port;                  /* Output port. */
+    uint16_t max_len;               /* Max length to send to controller. */
+    uint8_t pad[6];                 /* Pad to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_output) == 16);
+
+/* Action structure for OFPAT_SET_MPLS_TTL. */
+struct ofp_action_mpls_ttl {
+    uint16_t type;                  /* OFPAT_SET_MPLS_TTL. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t mpls_ttl;               /* MPLS TTL */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_mpls_ttl) == 8);
+
+/* Action structure for OFPAT_PUSH_VLAN/MPLS. */
+struct ofp_action_push {
+    uint16_t type;                  /* OFPAT_PUSH_VLAN/MPLS. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t ethertype;             /* Ethertype */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_push) == 8);
+
+/* Action structure for OFPAT_POP_MPLS. */
+struct ofp_action_pop_mpls {
+    uint16_t type;                  /* OFPAT_POP_MPLS. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t ethertype;             /* Ethertype */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_pop_mpls) == 8);
+
+/* Action structure for OFPAT_GROUP. */
+struct ofp_action_group {
+    uint16_t type;                  /* OFPAT_GROUP. */
+    uint16_t len;                   /* Length is 8. */
+    uint32_t group_id;              /* Group identifier. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_group) == 8);
+
+/* Action structure for OFPAT_SET_NW_TTL. */
+struct ofp_action_nw_ttl {
+    uint16_t type;                  /* OFPAT_SET_NW_TTL. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t nw_ttl;                 /* IP TTL */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_nw_ttl) == 8);
+
+/* Action structure for OFPAT_SET_FIELD. */
+struct ofp_action_set_field {
+    uint16_t type;                  /* OFPAT_SET_FIELD. */
+    uint16_t len;                   /* Length is padded to 64 bits. */
+    /* Followed by:
+     *   - Exactly oxm_len bytes containing a single OXM TLV, then
+     *   - Exactly ((oxm_len + 4) + 7)/8*8 - (oxm_len + 4) (between 0 and 7)
+     *     bytes of all-zero bytes
+     */
+    uint8_t field[4];               /* OXM TLV - Make compiler happy */
+};
+OFP_ASSERT(sizeof(struct ofp_action_set_field) == 8);
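+
+/* Informative example (not normative): a Set-Field action carrying
+ * OXM_OF_VLAN_VID has oxm_len = 4 (OXM header) + 2 (value) = 6 bytes of TLV,
+ * so the action body is 4 + 6 = 10 bytes followed by
+ * ((6 + 4) + 7)/8*8 - (6 + 4) = 6 zero bytes of padding, for a total action
+ * length of 16. */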
+
+/* Action header for OFPAT_EXPERIMENTER.
+ * The rest of the body is experimenter-defined. */
+struct ofp_action_experimenter_header {
+    uint16_t type;                  /* OFPAT_EXPERIMENTER. */
+    uint16_t len;                   /* Length is a multiple of 8. */
+    uint32_t experimenter;          /* Experimenter ID which takes the same
+                                       form as in struct
+                                       ofp_experimenter_header. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_experimenter_header) == 8);
+
+/* Action header that is common to all actions.  The length includes the
+ * header and any padding used to make the action 64-bit aligned.
+ * NB: The length of an action *must* always be a multiple of eight. */
+struct ofp_action_header {
+    uint16_t type;                  /* One of OFPAT_*. */
+    uint16_t len;                   /* Length of action, including this
+                                       header.  This is the length of action,
+                                       including any padding to make it
+                                       64-bit aligned. */
+    uint8_t pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_action_header) == 8);
+
+/* ## ---------------------- ## */
+/* ## OpenFlow Instructions. ## */
+/* ## ---------------------- ## */
+
+enum ofp_instruction_type {
+    OFPIT_GOTO_TABLE = 1,       /* Setup the next table in the lookup
+                                   pipeline */
+    OFPIT_WRITE_METADATA = 2,   /* Setup the metadata field for use later in
+                                   pipeline */
+    OFPIT_WRITE_ACTIONS = 3,    /* Write the action(s) onto the datapath action
+                                   set */
+    OFPIT_APPLY_ACTIONS = 4,    /* Applies the action(s) immediately */
+    OFPIT_CLEAR_ACTIONS = 5,    /* Clears all actions from the datapath
+                                   action set */
+
+    OFPIT_EXPERIMENTER = 0xFFFF  /* Experimenter instruction */
+};
+
+/* Generic ofp_instruction structure */
+struct ofp_instruction {
+    uint16_t type;                /* Instruction type */
+    uint16_t len;                 /* Length of this struct in bytes. */
+    uint8_t pad[4];               /* Align to 64-bits */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction) == 8);
+
+/* Instruction structure for OFPIT_GOTO_TABLE */
+struct ofp_instruction_goto_table {
+    uint16_t type;                /* OFPIT_GOTO_TABLE */
+    uint16_t len;                 /* Length of this struct in bytes. */
+    uint8_t table_id;             /* Set next table in the lookup pipeline */
+    uint8_t pad[3];               /* Pad to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_goto_table) == 8);
+
+/* Instruction structure for OFPIT_WRITE_METADATA */
+struct ofp_instruction_write_metadata {
+    uint16_t type;                /* OFPIT_WRITE_METADATA */
+    uint16_t len;                 /* Length of this struct in bytes. */
+    uint8_t pad[4];               /* Align to 64-bits */
+    uint64_t metadata;            /* Metadata value to write */
+    uint64_t metadata_mask;       /* Metadata write bitmask */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_write_metadata) == 24);
+
+/* Instruction structure for OFPIT_WRITE/APPLY/CLEAR_ACTIONS */
+struct ofp_instruction_actions {
+    uint16_t type;              /* One of OFPIT_*_ACTIONS */
+    uint16_t len;               /* Length of this struct in bytes. */
+    uint8_t pad[4];             /* Align to 64-bits */
+    struct ofp_action_header actions[0];  /* Actions associated with
+                                             OFPIT_WRITE_ACTIONS and
+                                             OFPIT_APPLY_ACTIONS */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_actions) == 8);
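+
+/* Informative example (not normative): an OFPIT_APPLY_ACTIONS instruction
+ * containing a single ofp_action_output has
+ *     len = 8 (instruction header) + 16 (action) = 24. */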
+
+/* Instruction structure for experimental instructions */
+struct ofp_instruction_experimenter {
+    uint16_t type;		/* OFPIT_EXPERIMENTER */
+    uint16_t len;               /* Length of this struct in bytes */
+    uint32_t experimenter;      /* Experimenter ID which takes the same form
+                                   as in struct ofp_experimenter_header. */
+    /* Experimenter-defined arbitrary additional data. */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_experimenter) == 8);
+
+/* ## --------------------------- ## */
+/* ## OpenFlow Flow Modification. ## */
+/* ## --------------------------- ## */
+
+enum ofp_flow_mod_command {
+    OFPFC_ADD           = 0, /* New flow. */
+    OFPFC_MODIFY        = 1, /* Modify all matching flows. */
+    OFPFC_MODIFY_STRICT = 2, /* Modify entry strictly matching wildcards and
+                                priority. */
+    OFPFC_DELETE        = 3, /* Delete all matching flows. */
+    OFPFC_DELETE_STRICT = 4, /* Delete entry strictly matching wildcards and
+                                priority. */
+};
+
+/* Value used in "idle_timeout" and "hard_timeout" to indicate that the entry
+ * is permanent. */
+#define OFP_FLOW_PERMANENT 0
+
+/* By default, choose a priority in the middle. */
+#define OFP_DEFAULT_PRIORITY 0x8000
+
+enum ofp_flow_mod_flags {
+    OFPFF_SEND_FLOW_REM = 1 << 0,  /* Send flow removed message when flow
+                                    * expires or is deleted. */
+    OFPFF_CHECK_OVERLAP = 1 << 1,  /* Check for overlapping entries first. */
+    OFPFF_RESET_COUNTS  = 1 << 2   /* Reset flow packet and byte counts. */
+};
+
+/* Flow setup and teardown (controller -> datapath). */
+struct ofp_flow_mod {
+    struct ofp_header header;
+    uint64_t cookie;             /* Opaque controller-issued identifier. */
+    uint64_t cookie_mask;        /* Mask used to restrict the cookie bits
+                                    that must match when the command is
+                                    OFPFC_MODIFY* or OFPFC_DELETE*. A value
+                                    of 0 indicates no restriction. */
+
+    /* Flow actions. */
+    uint8_t table_id;             /* ID of the table to put the flow in.
+                                     For OFPFC_DELETE_* commands, OFPTT_ALL
+                                     can also be used to delete matching
+                                     flows from all tables. */
+    uint8_t command;              /* One of OFPFC_*. */
+    uint16_t idle_timeout;        /* Idle time before discarding (seconds). */
+    uint16_t hard_timeout;        /* Max time before discarding (seconds). */
+    uint16_t priority;            /* Priority level of flow entry. */
+    uint32_t buffer_id;           /* Buffered packet to apply to, or
+                                     OFP_NO_BUFFER.
+                                     Not meaningful for OFPFC_DELETE*. */
+    uint32_t out_port;            /* For OFPFC_DELETE* commands, require
+                                     matching entries to include this as an
+                                     output port.  A value of OFPP_ANY
+                                     indicates no restriction. */
+    uint32_t out_group;           /* For OFPFC_DELETE* commands, require
+                                     matching entries to include this as an
+                                     output group.  A value of OFPG_ANY
+                                     indicates no restriction. */
+    uint16_t flags;               /* One of OFPFF_*. */
+    uint8_t pad[2];
+    struct ofp_match match;       /* Fields to match. Variable size. */
+    //struct ofp_instruction instructions[0]; /* Instruction set */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_mod) == 56);
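+
+/* Informative example (not normative): the wire length of a flow mod is the
+ * 48 fixed bytes preceding 'match', plus the padded match, plus all
+ * instructions.  A flow add whose match carries only OXM_OF_IN_PORT (padded
+ * to 16 bytes) and whose single instruction is an OFPIT_APPLY_ACTIONS with
+ * one ofp_action_output (24 bytes) has header.length = 48 + 16 + 24 = 88. */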
+
+/* Group numbering. Groups can use any number up to OFPG_MAX. */
+enum ofp_group {
+    /* Last usable group number. */
+    OFPG_MAX        = 0xffffff00,
+
+    /* Fake groups. */
+    OFPG_ALL        = 0xfffffffc,  /* Represents all groups for group delete
+                                      commands. */
+    OFPG_ANY        = 0xffffffff   /* Wildcard group used only for flow stats
+                                      requests. Selects all flows regardless of
+                                      group (including flows with no group).
+                                      */
+};
+
+/* Group commands */
+enum ofp_group_mod_command {
+    OFPGC_ADD    = 0,       /* New group. */
+    OFPGC_MODIFY = 1,       /* Modify all matching groups. */
+    OFPGC_DELETE = 2,       /* Delete all matching groups. */
+};
+
+/* Bucket for use in groups. */
+struct ofp_bucket {
+    uint16_t len;                   /* Length of the bucket in bytes, including
+                                       this header and any padding to make it
+                                       64-bit aligned. */
+    uint16_t weight;                /* Relative weight of bucket.  Only
+                                       defined for select groups. */
+    uint32_t watch_port;            /* Port whose state affects whether this
+                                       bucket is live.  Only required for fast
+                                       failover groups. */
+    uint32_t watch_group;           /* Group whose state affects whether this
+                                       bucket is live.  Only required for fast
+                                       failover groups. */
+    uint8_t pad[4];
+    struct ofp_action_header actions[0]; /* The action length is inferred
+                                           from the length field in the
+                                           header. */
+};
+OFP_ASSERT(sizeof(struct ofp_bucket) == 16);
+
+/* Group setup and teardown (controller -> datapath). */
+struct ofp_group_mod {
+    struct ofp_header header;
+    uint16_t command;             /* One of OFPGC_*. */
+    uint8_t type;                 /* One of OFPGT_*. */
+    uint8_t pad;                  /* Pad to 64 bits. */
+    uint32_t group_id;            /* Group identifier. */
+    struct ofp_bucket buckets[0]; /* The length of the bucket array is inferred
+                                     from the length field in the header. */
+};
+OFP_ASSERT(sizeof(struct ofp_group_mod) == 16);
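+
+/* Informative example (not normative): an OFPGC_ADD for an indirect group
+ * with one bucket containing a single ofp_action_output has
+ *     bucket.len    = 16 + 16 = 32,
+ *     header.length = 16 + 32 = 48. */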
+
+/* Group types.  Values in the range [128, 255] are reserved for experimental
+ * use. */
+enum ofp_group_type {
+    OFPGT_ALL      = 0, /* All (multicast/broadcast) group.  */
+    OFPGT_SELECT   = 1, /* Select group. */
+    OFPGT_INDIRECT = 2, /* Indirect group. */
+    OFPGT_FF       = 3, /* Fast failover group. */
+};
+
+/* Special buffer-id to indicate 'no buffer' */
+#define OFP_NO_BUFFER 0xffffffff
+
+/* Send packet (controller -> datapath). */
+struct ofp_packet_out {
+    struct ofp_header header;
+    uint32_t buffer_id;           /* ID assigned by datapath (OFP_NO_BUFFER
+                                     if none). */
+    uint32_t in_port;             /* Packet's input port or OFPP_CONTROLLER. */
+    uint16_t actions_len;         /* Size of action array in bytes. */
+    uint8_t pad[6];
+    struct ofp_action_header actions[0]; /* Action list. */
+    /* uint8_t data[0]; */        /* Packet data.  The length is inferred
+                                     from the length field in the header.
+                                     (Only meaningful if buffer_id == -1.) */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_out) == 24);
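+
+/* Informative example (not normative): a packet-out with buffer_id =
+ * OFP_NO_BUFFER, a single ofp_action_output, and an n-byte packet has
+ *     actions_len   = 16,
+ *     header.length = 24 + 16 + n. */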
+
+/* Why is this packet being sent to the controller? */
+enum ofp_packet_in_reason {
+    OFPR_NO_MATCH    = 0,   /* No matching flow. */
+    OFPR_ACTION      = 1,   /* Action explicitly output to controller. */
+    OFPR_INVALID_TTL = 2,   /* Packet has invalid TTL */
+};
+
+/* Packet received on port (datapath -> controller). */
+struct ofp_packet_in {
+    struct ofp_header header;
+    uint32_t buffer_id;     /* ID assigned by datapath. */
+    uint16_t total_len;     /* Full length of frame. */
+    uint8_t reason;         /* Reason packet is being sent (one of OFPR_*) */
+    uint8_t table_id;       /* ID of the table that was looked up */
+    struct ofp_match match; /* Packet metadata. Variable size. */
+    /* Followed by:
+     *   - Exactly 2 all-zero padding bytes, then
+     *   - An Ethernet frame whose length is inferred from header.length.
+     * The padding bytes preceding the Ethernet frame ensure that the IP
+     * header (if any) following the Ethernet header is 32-bit aligned.
+     */
+    //uint8_t pad[2];       /* Align to 64 bit + 16 bit */
+    //uint8_t data[0];      /* Ethernet frame */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_in) == 24);
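+
+/* Informative example (not normative): a packet-in whose match carries only
+ * OXM_OF_IN_PORT (match padded to 16 bytes) places the Ethernet frame at
+ * byte offset 16 + 16 + 2 = 34 from the start of the message, which keeps a
+ * following IPv4 header 32-bit aligned (34 + 14 = 48). */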
+
+/* Why was this flow removed? */
+enum ofp_flow_removed_reason {
+    OFPRR_IDLE_TIMEOUT = 0,     /* Flow idle time exceeded idle_timeout. */
+    OFPRR_HARD_TIMEOUT = 1,     /* Time exceeded hard_timeout. */
+    OFPRR_DELETE       = 2,     /* Evicted by a DELETE flow mod. */
+    OFPRR_GROUP_DELETE = 3,     /* Group was removed. */
+};
+
+/* Flow removed (datapath -> controller). */
+struct ofp_flow_removed {
+    struct ofp_header header;
+    uint64_t cookie;          /* Opaque controller-issued identifier. */
+
+    uint16_t priority;        /* Priority level of flow entry. */
+    uint8_t reason;           /* One of OFPRR_*. */
+    uint8_t table_id;         /* ID of the table */
+
+    uint32_t duration_sec;    /* Time flow was alive in seconds. */
+    uint32_t duration_nsec;   /* Time flow was alive in nanoseconds beyond
+                                 duration_sec. */
+    uint16_t idle_timeout;    /* Idle timeout from original flow mod. */
+    uint16_t hard_timeout;    /* Hard timeout from original flow mod. */
+    uint64_t packet_count;
+    uint64_t byte_count;
+    struct ofp_match match;   /* Description of fields. Variable size. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_removed) == 56);
+
+/* Values for 'type' in ofp_error_message.  These values are immutable: they
+ * will not change in future versions of the protocol (although new values may
+ * be added). */
+enum ofp_error_type {
+    OFPET_HELLO_FAILED         = 0,  /* Hello protocol failed. */
+    OFPET_BAD_REQUEST          = 1,  /* Request was not understood. */
+    OFPET_BAD_ACTION           = 2,  /* Error in action description. */
+    OFPET_BAD_INSTRUCTION      = 3,  /* Error in instruction list. */
+    OFPET_BAD_MATCH            = 4,  /* Error in match. */
+    OFPET_FLOW_MOD_FAILED      = 5,  /* Problem modifying flow entry. */
+    OFPET_GROUP_MOD_FAILED     = 6,  /* Problem modifying group entry. */
+    OFPET_PORT_MOD_FAILED      = 7,  /* Port mod request failed. */
+    OFPET_TABLE_MOD_FAILED     = 8,  /* Table mod request failed. */
+    OFPET_QUEUE_OP_FAILED      = 9,  /* Queue operation failed. */
+    OFPET_SWITCH_CONFIG_FAILED = 10, /* Switch config request failed. */
+    OFPET_ROLE_REQUEST_FAILED  = 11, /* Controller Role request failed. */
+    OFPET_EXPERIMENTER = 0xffff      /* Experimenter error messages. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_HELLO_FAILED.  'data' contains an
+ * ASCII text string that may give failure details. */
+enum ofp_hello_failed_code {
+    OFPHFC_INCOMPATIBLE = 0,    /* No compatible version. */
+    OFPHFC_EPERM        = 1,    /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_REQUEST.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_request_code {
+    OFPBRC_BAD_VERSION      = 0,  /* ofp_header.version not supported. */
+    OFPBRC_BAD_TYPE         = 1,  /* ofp_header.type not supported. */
+    OFPBRC_BAD_STAT         = 2,  /* ofp_stats_request.type not supported. */
+    OFPBRC_BAD_EXPERIMENTER = 3,  /* Experimenter id not supported
+                                   * (in ofp_experimenter_header or
+                                   * ofp_stats_request or ofp_stats_reply). */
+    OFPBRC_BAD_EXP_TYPE     = 4,  /* Experimenter type not supported. */
+    OFPBRC_EPERM            = 5,  /* Permissions error. */
+    OFPBRC_BAD_LEN          = 6,  /* Wrong request length for type. */
+    OFPBRC_BUFFER_EMPTY     = 7,  /* Specified buffer has already been used. */
+    OFPBRC_BUFFER_UNKNOWN   = 8,  /* Specified buffer does not exist. */
+    OFPBRC_BAD_TABLE_ID     = 9,  /* Specified table-id invalid or does not
+                                   * exist. */
+    OFPBRC_IS_SLAVE         = 10, /* Denied because controller is slave. */
+    OFPBRC_BAD_PORT         = 11, /* Invalid port. */
+    OFPBRC_BAD_PACKET       = 12, /* Invalid packet in packet-out. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_ACTION.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_action_code {
+    OFPBAC_BAD_TYPE           = 0,  /* Unknown action type. */
+    OFPBAC_BAD_LEN            = 1,  /* Length problem in actions. */
+    OFPBAC_BAD_EXPERIMENTER   = 2,  /* Unknown experimenter id specified. */
+    OFPBAC_BAD_EXP_TYPE       = 3,  /* Unknown action for experimenter id. */
+    OFPBAC_BAD_OUT_PORT       = 4,  /* Problem validating output port. */
+    OFPBAC_BAD_ARGUMENT       = 5,  /* Bad action argument. */
+    OFPBAC_EPERM              = 6,  /* Permissions error. */
+    OFPBAC_TOO_MANY           = 7,  /* Can't handle this many actions. */
+    OFPBAC_BAD_QUEUE          = 8,  /* Problem validating output queue. */
+    OFPBAC_BAD_OUT_GROUP      = 9,  /* Invalid group id in forward action. */
+    OFPBAC_MATCH_INCONSISTENT = 10, /* Action can't apply for this match,
+                                       or Set-Field missing prerequisite. */
+    OFPBAC_UNSUPPORTED_ORDER  = 11, /* Action order is unsupported for the
+                                 action list in an Apply-Actions instruction */
+    OFPBAC_BAD_TAG            = 12, /* Action uses an unsupported
+                                       tag/encap. */
+    OFPBAC_BAD_SET_TYPE       = 13, /* Unsupported type in SET_FIELD action. */
+    OFPBAC_BAD_SET_LEN        = 14, /* Length problem in SET_FIELD action. */
+    OFPBAC_BAD_SET_ARGUMENT   = 15, /* Bad argument in SET_FIELD action. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_INSTRUCTION.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_instruction_code {
+    OFPBIC_UNKNOWN_INST     = 0, /* Unknown instruction. */
+    OFPBIC_UNSUP_INST       = 1, /* Switch or table does not support the
+                                    instruction. */
+    OFPBIC_BAD_TABLE_ID     = 2, /* Invalid Table-ID specified. */
+    OFPBIC_UNSUP_METADATA   = 3, /* Metadata value unsupported by datapath. */
+    OFPBIC_UNSUP_METADATA_MASK = 4, /* Metadata mask value unsupported by
+                                       datapath. */
+    OFPBIC_BAD_EXPERIMENTER = 5, /* Unknown experimenter id specified. */
+    OFPBIC_BAD_EXP_TYPE     = 6, /* Unknown instruction for experimenter id. */
+    OFPBIC_BAD_LEN          = 7, /* Length problem in instructions. */
+    OFPBIC_EPERM            = 8, /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_MATCH.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_match_code {
+    OFPBMC_BAD_TYPE         = 0,  /* Unsupported match type specified by the
+                                     match */
+    OFPBMC_BAD_LEN          = 1,  /* Length problem in match. */
+    OFPBMC_BAD_TAG          = 2,  /* Match uses an unsupported tag/encap. */
+    OFPBMC_BAD_DL_ADDR_MASK = 3,  /* Unsupported datalink addr mask - switch
+                                     does not support arbitrary datalink
+                                     address mask. */
+    OFPBMC_BAD_NW_ADDR_MASK = 4,  /* Unsupported network addr mask - switch
+                                     does not support arbitrary network
+                                     address mask. */
+    OFPBMC_BAD_WILDCARDS    = 5,  /* Unsupported combination of fields masked
+                                     or omitted in the match. */
+    OFPBMC_BAD_FIELD        = 6,  /* Unsupported field type in the match. */
+    OFPBMC_BAD_VALUE        = 7,  /* Unsupported value in a match field. */
+    OFPBMC_BAD_MASK         = 8,  /* Unsupported mask specified in the match,
+                                     field is not dl-address or nw-address. */
+    OFPBMC_BAD_PREREQ       = 9,  /* A prerequisite was not met. */
+    OFPBMC_DUP_FIELD        = 10, /* A field type was duplicated. */
+    OFPBMC_EPERM            = 11, /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_FLOW_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_flow_mod_failed_code {
+    OFPFMFC_UNKNOWN      = 0,   /* Unspecified error. */
+    OFPFMFC_TABLE_FULL   = 1,   /* Flow not added because table was full. */
+    OFPFMFC_BAD_TABLE_ID = 2,   /* Table does not exist */
+    OFPFMFC_OVERLAP      = 3,   /* Attempted to add overlapping flow with
+                                   CHECK_OVERLAP flag set. */
+    OFPFMFC_EPERM        = 4,   /* Permissions error. */
+    OFPFMFC_BAD_TIMEOUT  = 5,   /* Flow not added because of unsupported
+                                   idle/hard timeout. */
+    OFPFMFC_BAD_COMMAND  = 6,   /* Unsupported or unknown command. */
+    OFPFMFC_BAD_FLAGS    = 7,   /* Unsupported or unknown flags. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_GROUP_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_group_mod_failed_code {
+    OFPGMFC_GROUP_EXISTS         = 0,  /* Group not added because a group ADD
+                                          attempted to replace an
+                                          already-present group. */
+    OFPGMFC_INVALID_GROUP        = 1,  /* Group not added because Group
+                                          specified is invalid. */
+    OFPGMFC_WEIGHT_UNSUPPORTED   = 2,  /* Switch does not support unequal load
+                                          sharing with select groups. */
+    OFPGMFC_OUT_OF_GROUPS        = 3,  /* The group table is full. */
+    OFPGMFC_OUT_OF_BUCKETS       = 4,  /* The maximum number of action buckets
+                                          for a group has been exceeded. */
+    OFPGMFC_CHAINING_UNSUPPORTED = 5,  /* Switch does not support groups that
+                                          forward to groups. */
+    OFPGMFC_WATCH_UNSUPPORTED    = 6,  /* This group cannot watch the watch_port
+                                          or watch_group specified. */
+    OFPGMFC_LOOP                 = 7,  /* Group entry would cause a loop. */
+    OFPGMFC_UNKNOWN_GROUP        = 8,  /* Group not modified because a group
+                                          MODIFY attempted to modify a
+                                          non-existent group. */
+    OFPGMFC_CHAINED_GROUP        = 9,  /* Group not deleted because another
+                                          group is forwarding to it. */
+    OFPGMFC_BAD_TYPE             = 10, /* Unsupported or unknown group type. */
+    OFPGMFC_BAD_COMMAND          = 11, /* Unsupported or unknown command. */
+    OFPGMFC_BAD_BUCKET           = 12, /* Error in bucket. */
+    OFPGMFC_BAD_WATCH            = 13, /* Error in watch port/group. */
+    OFPGMFC_EPERM                = 14, /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_PORT_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_port_mod_failed_code {
+    OFPPMFC_BAD_PORT      = 0,   /* Specified port number does not exist. */
+    OFPPMFC_BAD_HW_ADDR   = 1,   /* Specified hardware address does not
+                                  * match the port number. */
+    OFPPMFC_BAD_CONFIG    = 2,   /* Specified config is invalid. */
+    OFPPMFC_BAD_ADVERTISE = 3,   /* Specified advertise is invalid. */
+    OFPPMFC_EPERM         = 4,   /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_TABLE_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_table_mod_failed_code {
+    OFPTMFC_BAD_TABLE  = 0,      /* Specified table does not exist. */
+    OFPTMFC_BAD_CONFIG = 1,      /* Specified config is invalid. */
+    OFPTMFC_EPERM      = 2,      /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_QUEUE_OP_FAILED. 'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_queue_op_failed_code {
+    OFPQOFC_BAD_PORT   = 0,     /* Invalid port (or port does not exist). */
+    OFPQOFC_BAD_QUEUE  = 1,     /* Queue does not exist. */
+    OFPQOFC_EPERM      = 2,     /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_SWITCH_CONFIG_FAILED. 'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_switch_config_failed_code {
+    OFPSCFC_BAD_FLAGS  = 0,      /* Specified flags is invalid. */
+    OFPSCFC_BAD_LEN    = 1,      /* Specified len is invalid. */
+    OFPQCFC_EPERM      = 2,      /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_ROLE_REQUEST_FAILED. 'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_role_request_failed_code {
+    OFPRRFC_STALE      = 0,      /* Stale Message: old generation_id. */
+    OFPRRFC_UNSUP      = 1,      /* Controller role change unsupported. */
+    OFPRRFC_BAD_ROLE   = 2,      /* Invalid role. */
+};
+
+/* OFPT_ERROR: Error message (datapath -> controller). */
+struct ofp_error_msg {
+    struct ofp_header header;
+
+    uint16_t type;
+    uint16_t code;
+    uint8_t data[0];          /* Variable-length data.  Interpreted based
+                                 on the type and code.  No padding. */
+};
+OFP_ASSERT(sizeof(struct ofp_error_msg) == 12);
+
+/* OFPET_EXPERIMENTER: Error message (datapath -> controller). */
+struct ofp_error_experimenter_msg {
+    struct ofp_header header;
+
+    uint16_t type;            /* OFPET_EXPERIMENTER. */
+    uint16_t exp_type;        /* Experimenter defined. */
+    uint32_t experimenter;    /* Experimenter ID which takes the same form
+                                 as in struct ofp_experimenter_header. */
+    uint8_t data[0];          /* Variable-length data.  Interpreted based
+                                 on the type and code.  No padding. */
+};
+OFP_ASSERT(sizeof(struct ofp_error_experimenter_msg) == 16);
+
+enum ofp_stats_types {
+    /* Description of this OpenFlow switch.
+     * The request body is empty.
+     * The reply body is struct ofp_desc_stats. */
+    OFPST_DESC = 0,
+
+    /* Individual flow statistics.
+     * The request body is struct ofp_flow_stats_request.
+     * The reply body is an array of struct ofp_flow_stats. */
+    OFPST_FLOW = 1,
+
+    /* Aggregate flow statistics.
+     * The request body is struct ofp_aggregate_stats_request.
+     * The reply body is struct ofp_aggregate_stats_reply. */
+    OFPST_AGGREGATE = 2,
+
+    /* Flow table statistics.
+     * The request body is empty.
+     * The reply body is an array of struct ofp_table_stats. */
+    OFPST_TABLE = 3,
+
+    /* Port statistics.
+     * The request body is struct ofp_port_stats_request.
+     * The reply body is an array of struct ofp_port_stats. */
+    OFPST_PORT = 4,
+
+    /* Queue statistics for a port.
+     * The request body is struct ofp_queue_stats_request.
+     * The reply body is an array of struct ofp_queue_stats. */
+    OFPST_QUEUE = 5,
+
+    /* Group counter statistics.
+     * The request body is struct ofp_group_stats_request.
+     * The reply is an array of struct ofp_group_stats. */
+    OFPST_GROUP = 6,
+
+    /* Group description statistics.
+     * The request body is empty.
+     * The reply body is an array of struct ofp_group_desc_stats. */
+    OFPST_GROUP_DESC = 7,
+
+    /* Group features.
+     * The request body is empty.
+     * The reply body is struct ofp_group_features_stats. */
+    OFPST_GROUP_FEATURES = 8,
+
+    /* Experimenter extension.
+     * The request and reply bodies begin with
+     * struct ofp_experimenter_stats_header.
+     * The request and reply bodies are otherwise experimenter-defined. */
+    OFPST_EXPERIMENTER = 0xffff
+};
+
+struct ofp_stats_request {
+    struct ofp_header header;
+    uint16_t type;              /* One of the OFPST_* constants. */
+    uint16_t flags;             /* OFPSF_REQ_* flags (none yet defined). */
+    uint8_t pad[4];
+    uint8_t body[0];            /* Body of the request. */
+};
+OFP_ASSERT(sizeof(struct ofp_stats_request) == 16);
+
+enum ofp_stats_reply_flags {
+    OFPSF_REPLY_MORE  = 1 << 0  /* More replies to follow. */
+};
+
+struct ofp_stats_reply {
+    struct ofp_header header;
+    uint16_t type;              /* One of the OFPST_* constants. */
+    uint16_t flags;             /* OFPSF_REPLY_* flags. */
+    uint8_t pad[4];
+    uint8_t body[0];            /* Body of the reply. */
+};
+OFP_ASSERT(sizeof(struct ofp_stats_reply) == 16);
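+
+/* Informative sketch (not part of the wire format): a reply may be split
+ * across several ofp_stats_reply messages sharing the request's xid.  Using
+ * hypothetical read_stats_reply()/collect() helpers, a reader would keep
+ * collecting bodies until a reply arrives with OFPSF_REPLY_MORE cleared:
+ *
+ *     do {
+ *         reply = read_stats_reply(conn);
+ *         collect(acc, reply->body,
+ *                 ntohs(reply->header.length) - sizeof(*reply));
+ *     } while (ntohs(reply->flags) & OFPSF_REPLY_MORE);
+ */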
+
+#define DESC_STR_LEN   256
+#define SERIAL_NUM_LEN 32
+/* Body of reply to OFPST_DESC request.  Each entry is a NULL-terminated
+ * ASCII string. */
+struct ofp_desc_stats {
+    char mfr_desc[DESC_STR_LEN];       /* Manufacturer description. */
+    char hw_desc[DESC_STR_LEN];        /* Hardware description. */
+    char sw_desc[DESC_STR_LEN];        /* Software description. */
+    char serial_num[SERIAL_NUM_LEN];   /* Serial number. */
+    char dp_desc[DESC_STR_LEN];        /* Human readable description of datapath. */
+};
+OFP_ASSERT(sizeof(struct ofp_desc_stats) == 1056);
+
+/* Body for ofp_stats_request of type OFPST_FLOW. */
+struct ofp_flow_stats_request {
+    uint8_t table_id;         /* ID of table to read (from ofp_table_stats),
+                                 OFPTT_ALL for all tables. */
+    uint8_t pad[3];           /* Align to 32 bits. */
+    uint32_t out_port;        /* Require matching entries to include this
+                                 as an output port.  A value of OFPP_ANY
+                                 indicates no restriction. */
+    uint32_t out_group;       /* Require matching entries to include this
+                                 as an output group.  A value of OFPG_ANY
+                                 indicates no restriction. */
+    uint8_t pad2[4];          /* Align to 64 bits. */
+    uint64_t cookie;          /* Require matching entries to contain this
+                                 cookie value */
+    uint64_t cookie_mask;     /* Mask used to restrict the cookie bits that
+                                 must match. A value of 0 indicates
+                                 no restriction. */
+    struct ofp_match match;   /* Fields to match. Variable size. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_stats_request) == 40);
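+
+/* Informative note (not part of the wire format): assuming the usual
+ * masked-compare semantics, an entry passes the cookie restriction above
+ * when
+ *
+ *     (entry_cookie & cookie_mask) == (cookie & cookie_mask)
+ *
+ * so e.g. cookie = 0x1200, cookie_mask = 0xff00 selects entries whose
+ * cookie carries 0x12 in bits 8..15, and cookie_mask = 0 selects all. */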
+
+/* Body of reply to OFPST_FLOW request. */
+struct ofp_flow_stats {
+    uint16_t length;          /* Length of this entry. */
+    uint8_t table_id;         /* ID of table flow came from. */
+    uint8_t pad;
+    uint32_t duration_sec;    /* Time flow has been alive in seconds. */
+    uint32_t duration_nsec;   /* Time flow has been alive in nanoseconds beyond
+                                 duration_sec. */
+    uint16_t priority;        /* Priority of the entry. */
+    uint16_t idle_timeout;    /* Number of seconds idle before expiration. */
+    uint16_t hard_timeout;    /* Number of seconds before expiration. */
+    uint8_t pad2[6];          /* Align to 64-bits. */
+    uint64_t cookie;          /* Opaque controller-issued identifier. */
+    uint64_t packet_count;    /* Number of packets in flow. */
+    uint64_t byte_count;      /* Number of bytes in flow. */
+    struct ofp_match match;   /* Description of fields. Variable size. */
+    //struct ofp_instruction instructions[0]; /* Instruction set. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_stats) == 56);
+
+/* Body for ofp_stats_request of type OFPST_AGGREGATE. */
+struct ofp_aggregate_stats_request {
+    uint8_t table_id;         /* ID of table to read (from ofp_table_stats),
+                                 OFPTT_ALL for all tables. */
+    uint8_t pad[3];           /* Align to 32 bits. */
+    uint32_t out_port;        /* Require matching entries to include this
+                                 as an output port.  A value of OFPP_ANY
+                                 indicates no restriction. */
+    uint32_t out_group;       /* Require matching entries to include this
+                                 as an output group.  A value of OFPG_ANY
+                                 indicates no restriction. */
+    uint8_t pad2[4];          /* Align to 64 bits. */
+    uint64_t cookie;          /* Require matching entries to contain this
+                                 cookie value */
+    uint64_t cookie_mask;     /* Mask used to restrict the cookie bits that
+                                 must match. A value of 0 indicates
+                                 no restriction. */
+    struct ofp_match match;   /* Fields to match. Variable size. */
+};
+OFP_ASSERT(sizeof(struct ofp_aggregate_stats_request) == 40);
+
+/* Body of reply to OFPST_AGGREGATE request. */
+struct ofp_aggregate_stats_reply {
+    uint64_t packet_count;    /* Number of packets in flows. */
+    uint64_t byte_count;      /* Number of bytes in flows. */
+    uint32_t flow_count;      /* Number of flows. */
+    uint8_t pad[4];           /* Align to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_aggregate_stats_reply) == 24);
+
+/* Body of reply to OFPST_TABLE request. */
+struct ofp_table_stats {
+    uint8_t table_id;        /* Identifier of table.  Lower numbered tables
+                                are consulted first. */
+    uint8_t pad[7];          /* Align to 64-bits. */
+    char name[OFP_MAX_TABLE_NAME_LEN];
+    uint64_t match;          /* Bitmap of (1 << OFPXMT_*) that indicate the
+                                fields the table can match on. */
+    uint64_t wildcards;      /* Bitmap of (1 << OFPXMT_*) wildcards that are
+                                supported by the table. */
+    uint32_t write_actions;  /* Bitmap of OFPAT_* that are supported
+                                by the table with OFPIT_WRITE_ACTIONS. */
+    uint32_t apply_actions;  /* Bitmap of OFPAT_* that are supported
+                                by the table with OFPIT_APPLY_ACTIONS. */
+    uint64_t write_setfields;/* Bitmap of (1 << OFPXMT_*) header fields that
+                                can be set with OFPIT_WRITE_ACTIONS. */
+    uint64_t apply_setfields;/* Bitmap of (1 << OFPXMT_*) header fields that
+                                can be set with OFPIT_APPLY_ACTIONS. */
+    uint64_t metadata_match; /* Bits of metadata table can match. */
+    uint64_t metadata_write; /* Bits of metadata table can write. */
+    uint32_t instructions;   /* Bitmap of OFPIT_* values supported. */
+    uint32_t config;         /* Bitmap of OFPTC_* values */
+    uint32_t max_entries;    /* Max number of entries supported. */
+    uint32_t active_count;   /* Number of active entries. */
+    uint64_t lookup_count;   /* Number of packets looked up in table. */
+    uint64_t matched_count;  /* Number of packets that hit table. */
+};
+OFP_ASSERT(sizeof(struct ofp_table_stats) == 128);
+
+/* Body for ofp_stats_request of type OFPST_PORT. */
+struct ofp_port_stats_request {
+    uint32_t port_no;        /* OFPST_PORT message must request statistics
+                              * either for a single port (specified in
+                              * port_no) or for all ports (if port_no ==
+                              * OFPP_ANY). */
+    uint8_t pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_port_stats_request) == 8);
+
+/* Body of reply to OFPST_PORT request. If a counter is unsupported, set
+ * the field to all ones. */
+struct ofp_port_stats {
+    uint32_t port_no;
+    uint8_t pad[4];          /* Align to 64-bits. */
+    uint64_t rx_packets;     /* Number of received packets. */
+    uint64_t tx_packets;     /* Number of transmitted packets. */
+    uint64_t rx_bytes;       /* Number of received bytes. */
+    uint64_t tx_bytes;       /* Number of transmitted bytes. */
+    uint64_t rx_dropped;     /* Number of packets dropped by RX. */
+    uint64_t tx_dropped;     /* Number of packets dropped by TX. */
+    uint64_t rx_errors;      /* Number of receive errors.  This is a super-set
+                                of more specific receive errors and should be
+                                greater than or equal to the sum of all
+                                rx_*_err values. */
+    uint64_t tx_errors;      /* Number of transmit errors.  This is a super-set
+                                of more specific transmit errors and should be
+                                greater than or equal to the sum of all
+                                tx_*_err values (none currently defined.) */
+    uint64_t rx_frame_err;   /* Number of frame alignment errors. */
+    uint64_t rx_over_err;    /* Number of packets with RX overrun. */
+    uint64_t rx_crc_err;     /* Number of CRC errors. */
+    uint64_t collisions;     /* Number of collisions. */
+};
+OFP_ASSERT(sizeof(struct ofp_port_stats) == 104);
+
+/* Body of OFPST_GROUP request. */
+struct ofp_group_stats_request {
+    uint32_t group_id;       /* All groups if OFPG_ALL. */
+    uint8_t pad[4];          /* Align to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_group_stats_request) == 8);
+
+/* Used in group stats replies. */
+struct ofp_bucket_counter {
+    uint64_t packet_count;   /* Number of packets processed by bucket. */
+    uint64_t byte_count;     /* Number of bytes processed by bucket. */
+};
+OFP_ASSERT(sizeof(struct ofp_bucket_counter) == 16);
+
+/* Body of reply to OFPST_GROUP request. */
+struct ofp_group_stats {
+    uint16_t length;         /* Length of this entry. */
+    uint8_t pad[2];          /* Align to 64 bits. */
+    uint32_t group_id;       /* Group identifier. */
+    uint32_t ref_count;      /* Number of flows or groups that directly forward
+                                to this group. */
+    uint8_t pad2[4];         /* Align to 64 bits. */
+    uint64_t packet_count;   /* Number of packets processed by group. */
+    uint64_t byte_count;     /* Number of bytes processed by group. */
+    struct ofp_bucket_counter bucket_stats[0];
+};
+OFP_ASSERT(sizeof(struct ofp_group_stats) == 32);
+
+/* Body of reply to OFPST_GROUP_DESC request. */
+struct ofp_group_desc_stats {
+    uint16_t length;              /* Length of this entry. */
+    uint8_t type;                 /* One of OFPGT_*. */
+    uint8_t pad;                  /* Pad to 64 bits. */
+    uint32_t group_id;            /* Group identifier. */
+    struct ofp_bucket buckets[0];
+};
+OFP_ASSERT(sizeof(struct ofp_group_desc_stats) == 8);
+
+/* Group configuration flags */
+enum ofp_group_capabilities {
+    OFPGFC_SELECT_WEIGHT   = 1 << 0,  /* Support weight for select groups */
+    OFPGFC_SELECT_LIVENESS = 1 << 1,  /* Support liveness for select groups */
+    OFPGFC_CHAINING        = 1 << 2,  /* Support chaining groups */
+    OFPGFC_CHAINING_CHECKS = 1 << 3,  /* Check chaining for loops and delete */
+};
+
+/* Body of reply to OFPST_GROUP_FEATURES request. Group features. */
+struct ofp_group_features_stats {
+    uint32_t  types;           /* Bitmap of OFPGT_* values supported. */
+    uint32_t  capabilities;    /* Bitmap of OFPGFC_* capabilities supported. */
+    uint32_t  max_groups[4];   /* Maximum number of groups for each type. */
+    uint32_t  actions[4];      /* Bitmaps of OFPAT_* that are supported. */
+};
+OFP_ASSERT(sizeof(struct ofp_group_features_stats) == 40);
+
+/* Body for ofp_stats_request/reply of type OFPST_EXPERIMENTER. */
+struct ofp_experimenter_stats_header {
+    uint32_t experimenter;    /* Experimenter ID which takes the same form
+                                 as in struct ofp_experimenter_header. */
+    uint32_t exp_type;        /* Experimenter defined. */
+    /* Experimenter-defined arbitrary additional data. */
+};
+OFP_ASSERT(sizeof(struct ofp_experimenter_stats_header) == 8);
+
+/* Experimenter extension. */
+struct ofp_experimenter_header {
+    struct ofp_header header;   /* Type OFPT_EXPERIMENTER. */
+    uint32_t experimenter;      /* Experimenter ID:
+                                 * - MSB 0: low-order bytes are IEEE OUI.
+                                 * - MSB != 0: defined by ONF. */
+    uint32_t exp_type;          /* Experimenter defined. */
+    /* Experimenter-defined arbitrary additional data. */
+};
+OFP_ASSERT(sizeof(struct ofp_experimenter_header) == 16);
+
+/* All ones is used to indicate all queues in a port (for stats retrieval). */
+#define OFPQ_ALL      0xffffffff
+
+/* Min rate > 1000 means not configured. */
+#define OFPQ_MIN_RATE_UNCFG      0xffff
+
+/* Max rate > 1000 means not configured. */
+#define OFPQ_MAX_RATE_UNCFG      0xffff
+
+enum ofp_queue_properties {
+    OFPQT_MIN_RATE      = 1,      /* Minimum datarate guaranteed. */
+    OFPQT_MAX_RATE      = 2,      /* Maximum datarate. */
+    OFPQT_EXPERIMENTER  = 0xffff  /* Experimenter defined property. */
+};
+
+/* Common description for a queue. */
+struct ofp_queue_prop_header {
+    uint16_t property;    /* One of OFPQT_. */
+    uint16_t len;         /* Length of property, including this header. */
+    uint8_t pad[4];       /* 64-bit alignment. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_header) == 8);
+
+/* Min-Rate queue property description. */
+struct ofp_queue_prop_min_rate {
+    struct ofp_queue_prop_header prop_header; /* prop: OFPQT_MIN, len: 16. */
+    uint16_t rate;        /* In 1/10 of a percent; >1000 -> disabled. */
+    uint8_t pad[6];       /* 64-bit alignment */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_min_rate) == 16);
+
+/* Max-Rate queue property description. */
+struct ofp_queue_prop_max_rate {
+    struct ofp_queue_prop_header prop_header; /* prop: OFPQT_MAX, len: 16. */
+    uint16_t rate;        /* In 1/10 of a percent; >1000 -> disabled. */
+    uint8_t pad[6];       /* 64-bit alignment */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_max_rate) == 16);
+
+/* Experimenter queue property description. */
+struct ofp_queue_prop_experimenter {
+    struct ofp_queue_prop_header prop_header; /* prop: OFPQT_EXPERIMENTER, len: 16. */
+    uint32_t experimenter;          /* Experimenter ID which takes the same
+                                       form as in struct
+                                       ofp_experimenter_header. */
+    uint8_t pad[4];       /* 64-bit alignment */
+    uint8_t data[0];      /* Experimenter defined data. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_experimenter) == 16);
+
+/* Full description for a queue. */
+struct ofp_packet_queue {
+    uint32_t queue_id;     /* id for the specific queue. */
+    uint32_t port;         /* Port this queue is attached to. */
+    uint16_t len;          /* Length in bytes of this queue desc. */
+    uint8_t pad[6];        /* 64-bit alignment. */
+    struct ofp_queue_prop_header properties[0]; /* List of properties. */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_queue) == 16);
+
+/* Query for port queue configuration. */
+struct ofp_queue_get_config_request {
+    struct ofp_header header;
+    uint32_t port;         /* Port to be queried. Should refer
+                              to a valid physical port (i.e. < OFPP_MAX),
+                              or OFPP_ANY to request all configured
+                              queues.*/
+    uint8_t pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_queue_get_config_request) == 16);
+
+/* Queue configuration for a given port. */
+struct ofp_queue_get_config_reply {
+    struct ofp_header header;
+    uint32_t port;
+    uint8_t pad[4];
+    struct ofp_packet_queue queues[0]; /* List of configured queues. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_get_config_reply) == 16);
+
+/* OFPAT_SET_QUEUE action struct: send packets to given queue on port. */
+struct ofp_action_set_queue {
+    uint16_t type;            /* OFPAT_SET_QUEUE. */
+    uint16_t len;             /* Len is 8. */
+    uint32_t queue_id;        /* Queue id for the packets. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_set_queue) == 8);
+
+struct ofp_queue_stats_request {
+    uint32_t port_no;        /* All ports if OFPP_ANY. */
+    uint32_t queue_id;       /* All queues if OFPQ_ALL. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_stats_request) == 8);
+
+struct ofp_queue_stats {
+    uint32_t port_no;
+    uint32_t queue_id;       /* Queue id. */
+    uint64_t tx_bytes;       /* Number of transmitted bytes. */
+    uint64_t tx_packets;     /* Number of transmitted packets. */
+    uint64_t tx_errors;      /* Number of packets dropped due to overrun. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_stats) == 32);
+
+/* Configures the "role" of the sending controller.  The default role is:
+ *
+ *    - Equal (NX_ROLE_EQUAL), which allows the controller access to all
+ *      OpenFlow features. All controllers have equal responsibility.
+ *
+ * The other possible roles are a related pair:
+ *
+ *    - Master (NX_ROLE_MASTER) is equivalent to Equal, except that there may
+ *      be at most one Master controller at a time: when a controller
+ *      configures itself as Master, any existing Master is demoted to the
+ *      Slave role.
+ *
+ *    - Slave (NX_ROLE_SLAVE) allows the controller read-only access to
+ *      OpenFlow features.  In particular attempts to modify the flow table
+ *      will be rejected with an OFPBRC_EPERM error.
+ *
+ *      Slave controllers do not receive OFPT_PACKET_IN or OFPT_FLOW_REMOVED
+ *      messages, but they do receive OFPT_PORT_STATUS messages.
+ */
+
+/* Controller roles. */
+enum ofp_controller_role {
+    OFPCR_ROLE_NOCHANGE = 0,    /* Don't change current role. */
+    OFPCR_ROLE_EQUAL    = 1,    /* Default role, full access. */
+    OFPCR_ROLE_MASTER   = 2,    /* Full access, at most one master. */
+    OFPCR_ROLE_SLAVE    = 3,    /* Read-only access. */
+};
+
+/* Role request and reply message. */
+struct ofp_role_request {
+    struct ofp_header header;   /* Type OFPT_ROLE_REQUEST/OFPT_ROLE_REPLY. */
+    uint32_t role;              /* One of NX_ROLE_*. */
+    uint8_t pad[4];             /* Align to 64 bits. */
+    uint64_t generation_id;     /* Master Election Generation Id */
+};
+OFP_ASSERT(sizeof(struct ofp_role_request) == 24);
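+
+/* Informative sketch (not normative): a controller claiming the Master
+ * role could fill the structure roughly as below; next_xid() and
+ * current_gen_id are hypothetical, and standard byte-order helpers are
+ * assumed.
+ *
+ *     struct ofp_role_request rr = {0};
+ *     rr.header.version = OFP_VERSION;
+ *     rr.header.type    = OFPT_ROLE_REQUEST;
+ *     rr.header.length  = htons(sizeof rr);         // 24 bytes
+ *     rr.header.xid     = htonl(next_xid());
+ *     rr.role           = htonl(OFPCR_ROLE_MASTER);
+ *     rr.generation_id  = htobe64(current_gen_id);
+ */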
+
+#endif /* openflow/openflow.h */
diff --git a/canonical/openflow.h-1.3 b/canonical/openflow.h-1.3
new file mode 100644
index 0000000..791ceda
--- /dev/null
+++ b/canonical/openflow.h-1.3
@@ -0,0 +1,2318 @@
+/* Copyright (c) 2008 The Board of Trustees of The Leland Stanford
+ * Junior University
+ * Copyright (c) 2011, 2012 Open Networking Foundation
+ *
+ * We are making the OpenFlow specification and associated documentation
+ * (Software) available for public use and benefit with the expectation
+ * that others will use, modify and enhance the Software and contribute
+ * those enhancements back to the community. However, since we would
+ * like to make the Software available for broadest use, with as few
+ * restrictions as possible permission is hereby granted, free of
+ * charge, to any person obtaining a copy of this Software to deal in
+ * the Software under the copyrights without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT.  IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * The name and trademarks of copyright holder(s) may NOT be used in
+ * advertising or publicity pertaining to the Software or any
+ * derivatives without specific, written prior permission.
+ */
+
+/* OpenFlow: protocol between controller and datapath. */
+
+#ifndef OPENFLOW_OPENFLOW_H
+#define OPENFLOW_OPENFLOW_H 1
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#else
+#include <stdint.h>
+#endif
+
+#ifdef SWIG
+#define OFP_ASSERT(EXPR)        /* SWIG can't handle OFP_ASSERT. */
+#elif !defined(__cplusplus)
+/* Build-time assertion for use in a declaration context. */
+#define OFP_ASSERT(EXPR)                                                \
+        extern int (*build_assert(void))[ sizeof(struct {               \
+                    unsigned int build_assert_failed : (EXPR) ? 1 : -1; })]
+#else /* __cplusplus */
+#define OFP_ASSERT(_EXPR) typedef int build_assert_failed[(_EXPR) ? 1 : -1]
+#endif /* __cplusplus */
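+
+/* Informative note: the build-time assert above relies on bit-fields not
+ * being allowed a negative width.  For example, in the plain-C case
+ * OFP_ASSERT(sizeof(struct ofp_header) == 8), used below, declares a
+ * struct member
+ *
+ *     unsigned int build_assert_failed : (EXPR) ? 1 : -1;
+ *
+ * whose width is 1 when the expression holds and -1 (a compile error)
+ * when it does not. */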
+
+#ifndef SWIG
+#define OFP_PACKED __attribute__((packed))
+#else
+#define OFP_PACKED              /* SWIG doesn't understand __attribute. */
+#endif
+
+/* Version number:
+ * Non-experimental versions released: 0x01 = 1.0 ; 0x02 = 1.1 ; 0x03 = 1.2
+ *     0x04 = 1.3
+ * Experimental versions released: 0x81 -- 0x99
+ */
+/* The most significant bit being set in the version field indicates an
+ * experimental OpenFlow version.
+ */
+#define OFP_VERSION   0x04
+
+#define OFP_MAX_TABLE_NAME_LEN 32
+#define OFP_MAX_PORT_NAME_LEN  16
+
+#define OFP_TCP_PORT  6633
+#define OFP_SSL_PORT  6633
+
+#define OFP_ETH_ALEN 6          /* Bytes in an Ethernet address. */
+
+/* Port numbering. Ports are numbered starting from 1. */
+enum ofp_port_no {
+    /* Maximum number of physical and logical switch ports. */
+    OFPP_MAX        = 0xffffff00,
+
+    /* Reserved OpenFlow Port (fake output "ports"). */
+    OFPP_IN_PORT    = 0xfffffff8,  /* Send the packet out the input port.  This
+                                      reserved port must be explicitly used
+                                      in order to send back out of the input
+                                      port. */
+    OFPP_TABLE      = 0xfffffff9,  /* Submit the packet to the first flow table.
+                                      NB: This destination port can only be
+                                      used in packet-out messages. */
+    OFPP_NORMAL     = 0xfffffffa,  /* Process with normal L2/L3 switching. */
+    OFPP_FLOOD      = 0xfffffffb,  /* All physical ports in VLAN, except input
+                                      port and those blocked or link down. */
+    OFPP_ALL        = 0xfffffffc,  /* All physical ports except input port. */
+    OFPP_CONTROLLER = 0xfffffffd,  /* Send to controller. */
+    OFPP_LOCAL      = 0xfffffffe,  /* Local openflow "port". */
+    OFPP_ANY        = 0xffffffff   /* Wildcard port used only for flow mod
+                                      (delete) and flow stats requests. Selects
+                                      all flows regardless of output port
+                                      (including flows with no output port). */
+};
+
+enum ofp_type {
+    /* Immutable messages. */
+    OFPT_HELLO              = 0,  /* Symmetric message */
+    OFPT_ERROR              = 1,  /* Symmetric message */
+    OFPT_ECHO_REQUEST       = 2,  /* Symmetric message */
+    OFPT_ECHO_REPLY         = 3,  /* Symmetric message */
+    OFPT_EXPERIMENTER       = 4,  /* Symmetric message */
+
+    /* Switch configuration messages. */
+    OFPT_FEATURES_REQUEST   = 5,  /* Controller/switch message */
+    OFPT_FEATURES_REPLY     = 6,  /* Controller/switch message */
+    OFPT_GET_CONFIG_REQUEST = 7,  /* Controller/switch message */
+    OFPT_GET_CONFIG_REPLY   = 8,  /* Controller/switch message */
+    OFPT_SET_CONFIG         = 9,  /* Controller/switch message */
+
+    /* Asynchronous messages. */
+    OFPT_PACKET_IN          = 10, /* Async message */
+    OFPT_FLOW_REMOVED       = 11, /* Async message */
+    OFPT_PORT_STATUS        = 12, /* Async message */
+
+    /* Controller command messages. */
+    OFPT_PACKET_OUT         = 13, /* Controller/switch message */
+    OFPT_FLOW_MOD           = 14, /* Controller/switch message */
+    OFPT_GROUP_MOD          = 15, /* Controller/switch message */
+    OFPT_PORT_MOD           = 16, /* Controller/switch message */
+    OFPT_TABLE_MOD          = 17, /* Controller/switch message */
+
+    /* Multipart messages. */
+    OFPT_MULTIPART_REQUEST      = 18, /* Controller/switch message */
+    OFPT_MULTIPART_REPLY        = 19, /* Controller/switch message */
+
+    /* Barrier messages. */
+    OFPT_BARRIER_REQUEST    = 20, /* Controller/switch message */
+    OFPT_BARRIER_REPLY      = 21, /* Controller/switch message */
+
+    /* Queue Configuration messages. */
+    OFPT_QUEUE_GET_CONFIG_REQUEST = 22,  /* Controller/switch message */
+    OFPT_QUEUE_GET_CONFIG_REPLY   = 23,  /* Controller/switch message */
+
+    /* Controller role change request messages. */
+    OFPT_ROLE_REQUEST       = 24, /* Controller/switch message */
+    OFPT_ROLE_REPLY         = 25, /* Controller/switch message */
+
+    /* Asynchronous message configuration. */
+    OFPT_GET_ASYNC_REQUEST  = 26, /* Controller/switch message */
+    OFPT_GET_ASYNC_REPLY    = 27, /* Controller/switch message */
+    OFPT_SET_ASYNC          = 28, /* Controller/switch message */
+
+    /* Meters and rate limiters configuration messages. */
+    OFPT_METER_MOD          = 29, /* Controller/switch message */
+};
+
+/* Header on all OpenFlow packets. */
+struct ofp_header {
+    uint8_t version;    /* OFP_VERSION. */
+    uint8_t type;       /* One of the OFPT_ constants. */
+    uint16_t length;    /* Length including this ofp_header. */
+    uint32_t xid;       /* Transaction id associated with this packet.
+                           Replies use the same id as was in the request
+                           to facilitate pairing. */
+};
+OFP_ASSERT(sizeof(struct ofp_header) == 8);
+
+/* Hello elements types.
+ */
+enum ofp_hello_elem_type {
+    OFPHET_VERSIONBITMAP          = 1,  /* Bitmap of versions supported. */
+};
+
+/* Common header for all Hello Elements */
+struct ofp_hello_elem_header {
+    uint16_t         type;    /* One of OFPHET_*. */
+    uint16_t         length;  /* Length in bytes of this element. */
+};
+OFP_ASSERT(sizeof(struct ofp_hello_elem_header) == 4);
+
+/* Version bitmap Hello Element */
+struct ofp_hello_elem_versionbitmap {
+    uint16_t         type;    /* OFPHET_VERSIONBITMAP. */
+    uint16_t         length;  /* Length in bytes of this element. */
+    /* Followed by:
+     *   - Exactly (length - 4) bytes containing the bitmaps, then
+     *   - Exactly (length + 7)/8*8 - (length) (between 0 and 7)
+     *     bytes of all-zero bytes */
+    uint32_t         bitmaps[0];   /* List of bitmaps - supported versions */
+};
+OFP_ASSERT(sizeof(struct ofp_hello_elem_versionbitmap) == 4);
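+
+/* Informative worked example: a switch supporting versions 0x01 through
+ * 0x04 sends a single 32-bit bitmap word with bits 1..4 set (0x0000001e).
+ * The element length is then 4 (header) + 4 (bitmap) = 8, and since
+ * (8 + 7)/8*8 - 8 = 0 no trailing zero bytes are required. */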
+
+/* OFPT_HELLO.  This message includes zero or more hello elements having
+ * variable size. Unknown element types must be ignored/skipped, to allow
+ * for future extensions. */
+struct ofp_hello {
+    struct ofp_header header;
+
+    /* Hello element list */
+    struct ofp_hello_elem_header elements[0];
+};
+OFP_ASSERT(sizeof(struct ofp_hello) == 8);
+
+#define OFP_DEFAULT_MISS_SEND_LEN   128
+
+enum ofp_config_flags {
+    /* Handling of IP fragments. */
+    OFPC_FRAG_NORMAL   = 0,       /* No special handling for fragments. */
+    OFPC_FRAG_DROP     = 1 << 0,  /* Drop fragments. */
+    OFPC_FRAG_REASM    = 1 << 1,  /* Reassemble (only if OFPC_IP_REASM set). */
+    OFPC_FRAG_MASK     = 3,
+};
+
+/* Switch configuration. */
+struct ofp_switch_config {
+    struct ofp_header header;
+    uint16_t flags;             /* OFPC_* flags. */
+    uint16_t miss_send_len;     /* Max bytes of packet that datapath
+                                   should send to the controller. See
+                                   ofp_controller_max_len for valid values.
+                                   */
+};
+OFP_ASSERT(sizeof(struct ofp_switch_config) == 12);
+
+/* Flags to configure the table. Reserved for future use. */
+enum ofp_table_config {
+    OFPTC_DEPRECATED_MASK       = 3,  /* Deprecated bits */
+};
+
+/* Table numbering. Tables can use any number up to OFPTT_MAX. */
+enum ofp_table {
+    /* Last usable table number. */
+    OFPTT_MAX        = 0xfe,
+
+    /* Fake tables. */
+    OFPTT_ALL        = 0xff   /* Wildcard table used for table config,
+                                 flow stats and flow deletes. */
+};
+
+
+/* Configure/Modify behavior of a flow table */
+struct ofp_table_mod {
+    struct ofp_header header;
+    uint8_t table_id;       /* ID of the table, OFPTT_ALL indicates all tables */
+    uint8_t pad[3];         /* Pad to 32 bits */
+    uint32_t config;        /* Bitmap of OFPTC_* flags */
+};
+OFP_ASSERT(sizeof(struct ofp_table_mod) == 16);
+
+/* Capabilities supported by the datapath. */
+enum ofp_capabilities {
+    OFPC_FLOW_STATS     = 1 << 0,  /* Flow statistics. */
+    OFPC_TABLE_STATS    = 1 << 1,  /* Table statistics. */
+    OFPC_PORT_STATS     = 1 << 2,  /* Port statistics. */
+    OFPC_GROUP_STATS    = 1 << 3,  /* Group statistics. */
+    OFPC_IP_REASM       = 1 << 5,  /* Can reassemble IP fragments. */
+    OFPC_QUEUE_STATS    = 1 << 6,  /* Queue statistics. */
+    OFPC_PORT_BLOCKED   = 1 << 8   /* Switch will block looping ports. */
+};
+
+/* Flags to indicate behavior of the physical port.  These flags are
+ * used in ofp_port to describe the current configuration.  They are
+ * used in the ofp_port_mod message to configure the port's behavior.
+ */
+enum ofp_port_config {
+    OFPPC_PORT_DOWN    = 1 << 0,  /* Port is administratively down. */
+
+    OFPPC_NO_RECV      = 1 << 2,  /* Drop all packets received by port. */
+    OFPPC_NO_FWD       = 1 << 5,  /* Drop packets forwarded to port. */
+    OFPPC_NO_PACKET_IN = 1 << 6   /* Do not send packet-in msgs for port. */
+};
+
+/* Current state of the physical port.  These are not configurable from
+ * the controller.
+ */
+enum ofp_port_state {
+    OFPPS_LINK_DOWN    = 1 << 0,  /* No physical link present. */
+    OFPPS_BLOCKED      = 1 << 1,  /* Port is blocked */
+    OFPPS_LIVE         = 1 << 2,  /* Live for Fast Failover Group. */
+};
+
+/* Features of ports available in a datapath. */
+enum ofp_port_features {
+    OFPPF_10MB_HD    = 1 << 0,  /* 10 Mb half-duplex rate support. */
+    OFPPF_10MB_FD    = 1 << 1,  /* 10 Mb full-duplex rate support. */
+    OFPPF_100MB_HD   = 1 << 2,  /* 100 Mb half-duplex rate support. */
+    OFPPF_100MB_FD   = 1 << 3,  /* 100 Mb full-duplex rate support. */
+    OFPPF_1GB_HD     = 1 << 4,  /* 1 Gb half-duplex rate support. */
+    OFPPF_1GB_FD     = 1 << 5,  /* 1 Gb full-duplex rate support. */
+    OFPPF_10GB_FD    = 1 << 6,  /* 10 Gb full-duplex rate support. */
+    OFPPF_40GB_FD    = 1 << 7,  /* 40 Gb full-duplex rate support. */
+    OFPPF_100GB_FD   = 1 << 8,  /* 100 Gb full-duplex rate support. */
+    OFPPF_1TB_FD     = 1 << 9,  /* 1 Tb full-duplex rate support. */
+    OFPPF_OTHER      = 1 << 10, /* Other rate, not in the list. */
+
+    OFPPF_COPPER     = 1 << 11, /* Copper medium. */
+    OFPPF_FIBER      = 1 << 12, /* Fiber medium. */
+    OFPPF_AUTONEG    = 1 << 13, /* Auto-negotiation. */
+    OFPPF_PAUSE      = 1 << 14, /* Pause. */
+    OFPPF_PAUSE_ASYM = 1 << 15  /* Asymmetric pause. */
+};
+
+/* Description of a port */
+struct ofp_port {
+    uint32_t port_no;
+    uint8_t pad[4];
+    uint8_t hw_addr[OFP_ETH_ALEN];
+    uint8_t pad2[2];                  /* Align to 64 bits. */
+    char name[OFP_MAX_PORT_NAME_LEN]; /* Null-terminated */
+
+    uint32_t config;        /* Bitmap of OFPPC_* flags. */
+    uint32_t state;         /* Bitmap of OFPPS_* flags. */
+
+    /* Bitmaps of OFPPF_* that describe features.  All bits zeroed if
+     * unsupported or unavailable. */
+    uint32_t curr;          /* Current features. */
+    uint32_t advertised;    /* Features being advertised by the port. */
+    uint32_t supported;     /* Features supported by the port. */
+    uint32_t peer;          /* Features advertised by peer. */
+
+    uint32_t curr_speed;    /* Current port bitrate in kbps. */
+    uint32_t max_speed;     /* Max port bitrate in kbps */
+};
+OFP_ASSERT(sizeof(struct ofp_port) == 64);
+
+/* Switch features. */
+struct ofp_switch_features {
+    struct ofp_header header;
+    uint64_t datapath_id;   /* Datapath unique ID.  The lower 48-bits are for
+                               a MAC address, while the upper 16-bits are
+                               implementer-defined. */
+
+    uint32_t n_buffers;     /* Max packets buffered at once. */
+
+    uint8_t n_tables;       /* Number of tables supported by datapath. */
+    uint8_t auxiliary_id;   /* Identify auxiliary connections */
+    uint8_t pad[2];         /* Align to 64-bits. */
+
+    /* Features. */
+    uint32_t capabilities;  /* Bitmap of supported "ofp_capabilities". */
+    uint32_t reserved;
+};
+OFP_ASSERT(sizeof(struct ofp_switch_features) == 32);
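+
+/* Informative sketch (not normative): capabilities is a bitmap of
+ * ofp_capabilities, so a reader of a features reply would test bits after
+ * converting from network byte order, e.g.:
+ *
+ *     uint32_t caps = ntohl(features->capabilities);
+ *     int has_group_stats = (caps & OFPC_GROUP_STATS) != 0;
+ *     int can_reassemble  = (caps & OFPC_IP_REASM) != 0;
+ */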
+
+/* What changed about the physical port */
+enum ofp_port_reason {
+    OFPPR_ADD     = 0,         /* The port was added. */
+    OFPPR_DELETE  = 1,         /* The port was removed. */
+    OFPPR_MODIFY  = 2,         /* Some attribute of the port has changed. */
+};
+
+/* A physical port has changed in the datapath */
+struct ofp_port_status {
+    struct ofp_header header;
+    uint8_t reason;          /* One of OFPPR_*. */
+    uint8_t pad[7];          /* Align to 64-bits. */
+    struct ofp_port desc;
+};
+OFP_ASSERT(sizeof(struct ofp_port_status) == 80);
+
+/* Modify behavior of the physical port */
+struct ofp_port_mod {
+    struct ofp_header header;
+    uint32_t port_no;
+    uint8_t pad[4];
+    uint8_t hw_addr[OFP_ETH_ALEN]; /* The hardware address is not
+                                      configurable.  This is used to
+                                      sanity-check the request, so it must
+                                      be the same as returned in an
+                                      ofp_port struct. */
+    uint8_t pad2[2];        /* Pad to 64 bits. */
+    uint32_t config;        /* Bitmap of OFPPC_* flags. */
+    uint32_t mask;          /* Bitmap of OFPPC_* flags to be changed. */
+
+    uint32_t advertise;     /* Bitmap of OFPPF_*.  Zero all bits to prevent
+                               any action taking place. */
+    uint8_t pad3[4];        /* Pad to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_port_mod) == 40);
+
+/* ## -------------------------- ## */
+/* ## OpenFlow Extensible Match. ## */
+/* ## -------------------------- ## */
+
+/* The match type indicates the match structure (set of fields that compose the
+ * match) in use. The match type is placed in the type field at the beginning
+ * of all match structures. The "OpenFlow Extensible Match" type corresponds
+ * to OXM TLV format described below and must be supported by all OpenFlow
+ * switches. Extensions that define other match types may be published on the
+ * ONF wiki. Support for extensions is optional.
+ */
+enum ofp_match_type {
+    OFPMT_STANDARD = 0,       /* Deprecated. */
+    OFPMT_OXM      = 1,       /* OpenFlow Extensible Match */
+};
+
+/* Fields to match against flows */
+struct ofp_match {
+    uint16_t type;             /* One of OFPMT_* */
+    uint16_t length;           /* Length of ofp_match (excluding padding) */
+    /* Followed by:
+     *   - Exactly (length - 4) (possibly 0) bytes containing OXM TLVs, then
+     *   - Exactly ((length + 7)/8*8 - length) (between 0 and 7) bytes of
+     *     all-zero bytes
+     * In summary, ofp_match is padded as needed, to make its overall size
+     * a multiple of 8, to preserve alignment in structures using it.
+     */
+    uint8_t oxm_fields[4];     /* OXMs start here - Make compiler happy */
+};
+OFP_ASSERT(sizeof(struct ofp_match) == 8);
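+
+/* Informative worked example: with a single OXM_OF_IN_PORT TLV the match
+ * carries 4 (OXM header) + 4 (value) = 8 bytes of OXM data, so length =
+ * 4 + 8 = 12 and (12 + 7)/8*8 - 12 = 4 zero bytes of padding follow,
+ * giving a 16-byte structure on the wire. */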
+
+/* Components of a OXM TLV header. */
+#define OXM_HEADER__(CLASS, FIELD, HASMASK, LENGTH) \
+    (((CLASS) << 16) | ((FIELD) << 9) | ((HASMASK) << 8) | (LENGTH))
+#define OXM_HEADER(CLASS, FIELD, LENGTH) \
+    OXM_HEADER__(CLASS, FIELD, 0, LENGTH)
+#define OXM_HEADER_W(CLASS, FIELD, LENGTH) \
+    OXM_HEADER__(CLASS, FIELD, 1, (LENGTH) * 2)
+#define OXM_CLASS(HEADER) ((HEADER) >> 16)
+#define OXM_FIELD(HEADER) (((HEADER) >> 9) & 0x7f)
+#define OXM_TYPE(HEADER) (((HEADER) >> 9) & 0x7fffff)
+#define OXM_HASMASK(HEADER) (((HEADER) >> 8) & 1)
+#define OXM_LENGTH(HEADER) ((HEADER) & 0xff)
+
+#define OXM_MAKE_WILD_HEADER(HEADER) \
+        OXM_HEADER_W(OXM_CLASS(HEADER), OXM_FIELD(HEADER), OXM_LENGTH(HEADER))
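+
+/* Informative worked examples of the macros above:
+ *
+ *     OXM_OF_IN_PORT   = OXM_HEADER(0x8000, OFPXMT_OFB_IN_PORT, 4)
+ *                      = (0x8000 << 16) | (0 << 9) | (0 << 8) | 4
+ *                      = 0x80000004
+ *     OXM_OF_ETH_DST_W = OXM_HEADER_W(0x8000, OFPXMT_OFB_ETH_DST, 6)
+ *                      = (0x8000 << 16) | (3 << 9) | (1 << 8) | 6 * 2
+ *                      = 0x8000070c
+ *
+ * OXM_CLASS(), OXM_FIELD(), OXM_HASMASK() and OXM_LENGTH() simply extract
+ * those bit ranges again. */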
+
+/* OXM Class IDs.
+ * The high-order bit differentiates reserved classes from member classes.
+ * Classes 0x0000 to 0x7FFF are member classes, allocated by ONF.
+ * Classes 0x8000 to 0xFFFE are reserved classes, reserved for standardisation.
+ */
+enum ofp_oxm_class {
+    OFPXMC_NXM_0          = 0x0000,    /* Backward compatibility with NXM */
+    OFPXMC_NXM_1          = 0x0001,    /* Backward compatibility with NXM */
+    OFPXMC_OPENFLOW_BASIC = 0x8000,    /* Basic class for OpenFlow */
+    OFPXMC_EXPERIMENTER   = 0xFFFF,    /* Experimenter class */
+};
+
+/* OXM Flow match field types for OpenFlow basic class. */
+enum oxm_ofb_match_fields {
+    OFPXMT_OFB_IN_PORT        = 0,  /* Switch input port. */
+    OFPXMT_OFB_IN_PHY_PORT    = 1,  /* Switch physical input port. */
+    OFPXMT_OFB_METADATA       = 2,  /* Metadata passed between tables. */
+    OFPXMT_OFB_ETH_DST        = 3,  /* Ethernet destination address. */
+    OFPXMT_OFB_ETH_SRC        = 4,  /* Ethernet source address. */
+    OFPXMT_OFB_ETH_TYPE       = 5,  /* Ethernet frame type. */
+    OFPXMT_OFB_VLAN_VID       = 6,  /* VLAN id. */
+    OFPXMT_OFB_VLAN_PCP       = 7,  /* VLAN priority. */
+    OFPXMT_OFB_IP_DSCP        = 8,  /* IP DSCP (6 bits in ToS field). */
+    OFPXMT_OFB_IP_ECN         = 9,  /* IP ECN (2 bits in ToS field). */
+    OFPXMT_OFB_IP_PROTO       = 10, /* IP protocol. */
+    OFPXMT_OFB_IPV4_SRC       = 11, /* IPv4 source address. */
+    OFPXMT_OFB_IPV4_DST       = 12, /* IPv4 destination address. */
+    OFPXMT_OFB_TCP_SRC        = 13, /* TCP source port. */
+    OFPXMT_OFB_TCP_DST        = 14, /* TCP destination port. */
+    OFPXMT_OFB_UDP_SRC        = 15, /* UDP source port. */
+    OFPXMT_OFB_UDP_DST        = 16, /* UDP destination port. */
+    OFPXMT_OFB_SCTP_SRC       = 17, /* SCTP source port. */
+    OFPXMT_OFB_SCTP_DST       = 18, /* SCTP destination port. */
+    OFPXMT_OFB_ICMPV4_TYPE    = 19, /* ICMP type. */
+    OFPXMT_OFB_ICMPV4_CODE    = 20, /* ICMP code. */
+    OFPXMT_OFB_ARP_OP         = 21, /* ARP opcode. */
+    OFPXMT_OFB_ARP_SPA        = 22, /* ARP source IPv4 address. */
+    OFPXMT_OFB_ARP_TPA        = 23, /* ARP target IPv4 address. */
+    OFPXMT_OFB_ARP_SHA        = 24, /* ARP source hardware address. */
+    OFPXMT_OFB_ARP_THA        = 25, /* ARP target hardware address. */
+    OFPXMT_OFB_IPV6_SRC       = 26, /* IPv6 source address. */
+    OFPXMT_OFB_IPV6_DST       = 27, /* IPv6 destination address. */
+    OFPXMT_OFB_IPV6_FLABEL    = 28, /* IPv6 Flow Label */
+    OFPXMT_OFB_ICMPV6_TYPE    = 29, /* ICMPv6 type. */
+    OFPXMT_OFB_ICMPV6_CODE    = 30, /* ICMPv6 code. */
+    OFPXMT_OFB_IPV6_ND_TARGET = 31, /* Target address for ND. */
+    OFPXMT_OFB_IPV6_ND_SLL    = 32, /* Source link-layer for ND. */
+    OFPXMT_OFB_IPV6_ND_TLL    = 33, /* Target link-layer for ND. */
+    OFPXMT_OFB_MPLS_LABEL     = 34, /* MPLS label. */
+    OFPXMT_OFB_MPLS_TC        = 35, /* MPLS TC. */
+    OFPXMT_OFP_MPLS_BOS       = 36, /* MPLS BoS bit. */
+    OFPXMT_OFB_PBB_ISID       = 37, /* PBB I-SID. */
+    OFPXMT_OFB_TUNNEL_ID      = 38, /* Logical Port Metadata. */
+    OFPXMT_OFB_IPV6_EXTHDR    = 39, /* IPv6 Extension Header pseudo-field */
+};
+
+#define OFPXMT_OFB_ALL    ((UINT64_C(1) << 40) - 1)
+
+/* OpenFlow port on which the packet was received.
+ * May be a physical port, a logical port, or the reserved port OFPP_LOCAL
+ *
+ * Prereqs: None.
+ *
+ * Format: 32-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IN_PORT    OXM_HEADER  (0x8000, OFPXMT_OFB_IN_PORT, 4)
+
+/* Physical port on which the packet was received.
+ *
+ * Consider a packet received on a tunnel interface defined over a link
+ * aggregation group (LAG) with two physical port members, where the tunnel
+ * interface is the logical port bound to OpenFlow.  In this case,
+ * OFPXMT_OF_IN_PORT is the tunnel's port number and OFPXMT_OF_IN_PHY_PORT is
+ * the physical port number of the LAG on which the tunnel is configured.
+ *
+ * When a packet is received directly on a physical port and not processed by a
+ * logical port, OFPXMT_OF_IN_PORT and OFPXMT_OF_IN_PHY_PORT have the same
+ * value.
+ *
+ * This field is usually not available in a regular match and only available
+ * in ofp_packet_in messages when it's different from OXM_OF_IN_PORT.
+ *
+ * Prereqs: OXM_OF_IN_PORT must be present.
+ *
+ * Format: 32-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IN_PHY_PORT OXM_HEADER  (0x8000, OFPXMT_OFB_IN_PHY_PORT, 4)
+
+/* Table metadata.
+ *
+ * Prereqs: None.
+ *
+ * Format: 64-bit integer in network byte order.
+ *
+ * Masking: Arbitrary masks.
+ */
+#define OXM_OF_METADATA   OXM_HEADER  (0x8000, OFPXMT_OFB_METADATA, 8)
+#define OXM_OF_METADATA_W OXM_HEADER_W(0x8000, OFPXMT_OFB_METADATA, 8)
+
+/* Source or destination address in Ethernet header.
+ *
+ * Prereqs: None.
+ *
+ * Format: 48-bit Ethernet MAC address.
+ *
+ * Masking: Arbitrary masks. */
+#define OXM_OF_ETH_DST    OXM_HEADER  (0x8000, OFPXMT_OFB_ETH_DST, 6)
+#define OXM_OF_ETH_DST_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_ETH_DST, 6)
+#define OXM_OF_ETH_SRC    OXM_HEADER  (0x8000, OFPXMT_OFB_ETH_SRC, 6)
+#define OXM_OF_ETH_SRC_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_ETH_SRC, 6)
+
+/* Packet's Ethernet type.
+ *
+ * Prereqs: None.
+ *
+ * Format: 16-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_ETH_TYPE   OXM_HEADER  (0x8000, OFPXMT_OFB_ETH_TYPE, 2)
+
+/* The VLAN id is 12-bits, so we can use the entire 16 bits to indicate
+ * special conditions.
+ */
+enum ofp_vlan_id {
+    OFPVID_PRESENT = 0x1000, /* Bit that indicates that a VLAN id is set. */
+    OFPVID_NONE    = 0x0000, /* No VLAN id was set. */
+};
+/* Define for compatibility */
+#define OFP_VLAN_NONE      OFPVID_NONE
+
+/* 802.1Q VID.
+ *
+ * For a packet with an 802.1Q header, this is the VLAN-ID (VID) from the
+ * outermost tag, with the CFI bit forced to 1. For a packet with no 802.1Q
+ * header, this has value OFPVID_NONE.
+ *
+ * Prereqs: None.
+ *
+ * Format: 16-bit integer in network byte order with bit 13 indicating
+ * presence of VLAN header and 3 most-significant bits forced to 0.
+ * Only the lower 13 bits have meaning.
+ *
+ * Masking: Arbitrary masks.
+ *
+ * This field can be used in various ways:
+ *
+ *   - If it is not constrained at all, the nx_match matches packets without
+ *     an 802.1Q header or with an 802.1Q header that has any VID value.
+ *
+ *   - Testing for an exact match with 0x0 matches only packets without
+ *     an 802.1Q header.
+ *
+ *   - Testing for an exact match with a VID value with CFI=1 matches packets
+ *     that have an 802.1Q header with a specified VID.
+ *
+ *   - Testing for an exact match with a nonzero VID value with CFI=0 does
+ *     not make sense.  The switch may reject this combination.
+ *
+ *   - Testing with nxm_value=0, nxm_mask=0x0fff matches packets with no 802.1Q
+ *     header or with an 802.1Q header with a VID of 0.
+ *
+ *   - Testing with nxm_value=0x1000, nxm_mask=0x1000 matches packets with
+ *     an 802.1Q header that has any VID value.
+ */
+#define OXM_OF_VLAN_VID   OXM_HEADER  (0x8000, OFPXMT_OFB_VLAN_VID, 2)
+#define OXM_OF_VLAN_VID_W OXM_HEADER_W(0x8000, OFPXMT_OFB_VLAN_VID, 2)
+
+/* 802.1Q PCP.
+ *
+ * For a packet with an 802.1Q header, this is the VLAN-PCP from the
+ * outermost tag.  For a packet with no 802.1Q header, this has value
+ * 0.
+ *
+ * Prereqs: OXM_OF_VLAN_VID must be different from OFPVID_NONE.
+ *
+ * Format: 8-bit integer with 5 most-significant bits forced to 0.
+ * Only the lower 3 bits have meaning.
+ *
+ * Masking: Not maskable.
+ */
+#define OXM_OF_VLAN_PCP   OXM_HEADER  (0x8000, OFPXMT_OFB_VLAN_PCP, 1)
+
+/* The Diff Serv Code Point (DSCP) bits of the IP header.
+ * Part of the IPv4 ToS field or the IPv6 Traffic Class field.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must be either 0x0800 or 0x86dd.
+ *
+ * Format: 8-bit integer with 2 most-significant bits forced to 0.
+ * Only the lower 6 bits have meaning.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IP_DSCP     OXM_HEADER  (0x8000, OFPXMT_OFB_IP_DSCP, 1)
+
+/* The ECN bits of the IP header.
+ * Part of the IPv4 ToS field or the IPv6 Traffic Class field.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must be either 0x0800 or 0x86dd.
+ *
+ * Format: 8-bit integer with 6 most-significant bits forced to 0.
+ * Only the lower 2 bits have meaning.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IP_ECN     OXM_HEADER  (0x8000, OFPXMT_OFB_IP_ECN, 1)
+
+/* The "protocol" byte in the IP header.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must be either 0x0800 or 0x86dd.
+ *
+ * Format: 8-bit integer.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IP_PROTO   OXM_HEADER  (0x8000, OFPXMT_OFB_IP_PROTO, 1)
+
+/* The source or destination address in the IP header.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must match 0x0800 exactly.
+ *
+ * Format: 32-bit integer in network byte order.
+ *
+ * Masking: Arbitrary masks.
+ */
+#define OXM_OF_IPV4_SRC     OXM_HEADER  (0x8000, OFPXMT_OFB_IPV4_SRC, 4)
+#define OXM_OF_IPV4_SRC_W   OXM_HEADER_W(0x8000, OFPXMT_OFB_IPV4_SRC, 4)
+#define OXM_OF_IPV4_DST     OXM_HEADER  (0x8000, OFPXMT_OFB_IPV4_DST, 4)
+#define OXM_OF_IPV4_DST_W   OXM_HEADER_W(0x8000, OFPXMT_OFB_IPV4_DST, 4)
+
+/* The source or destination port in the TCP header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must be either 0x0800 or 0x86dd.
+ *   OXM_OF_IP_PROTO must match 6 exactly.
+ *
+ * Format: 16-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_TCP_SRC    OXM_HEADER  (0x8000, OFPXMT_OFB_TCP_SRC, 2)
+#define OXM_OF_TCP_DST    OXM_HEADER  (0x8000, OFPXMT_OFB_TCP_DST, 2)
+
+/* The source or destination port in the UDP header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match either 0x0800 or 0x86dd.
+ *   OXM_OF_IP_PROTO must match 17 exactly.
+ *
+ * Format: 16-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_UDP_SRC    OXM_HEADER  (0x8000, OFPXMT_OFB_UDP_SRC, 2)
+#define OXM_OF_UDP_DST    OXM_HEADER  (0x8000, OFPXMT_OFB_UDP_DST, 2)
+
+/* The source or destination port in the SCTP header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match either 0x0800 or 0x86dd.
+ *   OXM_OF_IP_PROTO must match 132 exactly.
+ *
+ * Format: 16-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_SCTP_SRC   OXM_HEADER  (0x8000, OFPXMT_OFB_SCTP_SRC, 2)
+#define OXM_OF_SCTP_DST   OXM_HEADER  (0x8000, OFPXMT_OFB_SCTP_DST, 2)
+
+/* The type or code in the ICMP header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x0800 exactly.
+ *   OXM_OF_IP_PROTO must match 1 exactly.
+ *
+ * Format: 8-bit integer.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_ICMPV4_TYPE  OXM_HEADER  (0x8000, OFPXMT_OFB_ICMPV4_TYPE, 1)
+#define OXM_OF_ICMPV4_CODE  OXM_HEADER  (0x8000, OFPXMT_OFB_ICMPV4_CODE, 1)
+
+/* ARP opcode.
+ *
+ * For an Ethernet+IP ARP packet, the opcode in the ARP header.  Always 0
+ * otherwise.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must match 0x0806 exactly.
+ *
+ * Format: 16-bit integer in network byte order.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_ARP_OP     OXM_HEADER  (0x8000, OFPXMT_OFB_ARP_OP, 2)
+
+/* For an Ethernet+IP ARP packet, the source or target protocol address
+ * in the ARP header.  Always 0 otherwise.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must match 0x0806 exactly.
+ *
+ * Format: 32-bit integer in network byte order.
+ *
+ * Masking: Arbitrary masks.
+ */
+#define OXM_OF_ARP_SPA    OXM_HEADER  (0x8000, OFPXMT_OFB_ARP_SPA, 4)
+#define OXM_OF_ARP_SPA_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_ARP_SPA, 4)
+#define OXM_OF_ARP_TPA    OXM_HEADER  (0x8000, OFPXMT_OFB_ARP_TPA, 4)
+#define OXM_OF_ARP_TPA_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_ARP_TPA, 4)
+
+/* For an Ethernet+IP ARP packet, the source or target hardware address
+ * in the ARP header.  Always 0 otherwise.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must match 0x0806 exactly.
+ *
+ * Format: 48-bit Ethernet MAC address.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_ARP_SHA    OXM_HEADER  (0x8000, OFPXMT_OFB_ARP_SHA, 6)
+#define OXM_OF_ARP_THA    OXM_HEADER  (0x8000, OFPXMT_OFB_ARP_THA, 6)
+
+/* The source or destination address in the IPv6 header.
+ *
+ * Prereqs: OXM_OF_ETH_TYPE must match 0x86dd exactly.
+ *
+ * Format: 128-bit IPv6 address.
+ *
+ * Masking: Arbitrary masks.
+ */
+#define OXM_OF_IPV6_SRC    OXM_HEADER  (0x8000, OFPXMT_OFB_IPV6_SRC, 16)
+#define OXM_OF_IPV6_SRC_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_IPV6_SRC, 16)
+#define OXM_OF_IPV6_DST    OXM_HEADER  (0x8000, OFPXMT_OFB_IPV6_DST, 16)
+#define OXM_OF_IPV6_DST_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_IPV6_DST, 16)
+
+/* The IPv6 Flow Label
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x86dd exactly
+ *
+ * Format: 32-bit integer with 12 most-significant bits forced to 0.
+ * Only the lower 20 bits have meaning.
+ *
+ * Masking: Maskable. */
+#define OXM_OF_IPV6_FLABEL   OXM_HEADER  (0x8000, OFPXMT_OFB_IPV6_FLABEL, 4)
+
+/* The type or code in the ICMPv6 header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x86dd exactly.
+ *   OXM_OF_IP_PROTO must match 58 exactly.
+ *
+ * Format: 8-bit integer.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_ICMPV6_TYPE OXM_HEADER  (0x8000, OFPXMT_OFB_ICMPV6_TYPE, 1)
+#define OXM_OF_ICMPV6_CODE OXM_HEADER  (0x8000, OFPXMT_OFB_ICMPV6_CODE, 1)
+
+/* The target address in an IPv6 Neighbor Discovery message.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x86dd exactly.
+ *   OXM_OF_IP_PROTO must match 58 exactly.
+ *   OXM_OF_ICMPV6_TYPE must be either 135 or 136.
+ *
+ * Format: 128-bit IPv6 address.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IPV6_ND_TARGET OXM_HEADER (0x8000, OFPXMT_OFB_IPV6_ND_TARGET, 16)
+
+/* The source link-layer address option in an IPv6 Neighbor Discovery
+ * message.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x86dd exactly.
+ *   OXM_OF_IP_PROTO must match 58 exactly.
+ *   OXM_OF_ICMPV6_TYPE must be exactly 135.
+ *
+ * Format: 48-bit Ethernet MAC address.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IPV6_ND_SLL  OXM_HEADER  (0x8000, OFPXMT_OFB_IPV6_ND_SLL, 6)
+
+/* The target link-layer address option in an IPv6 Neighbor Discovery
+ * message.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x86dd exactly.
+ *   OXM_OF_IP_PROTO must match 58 exactly.
+ *   OXM_OF_ICMPV6_TYPE must be exactly 136.
+ *
+ * Format: 48-bit Ethernet MAC address.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_IPV6_ND_TLL  OXM_HEADER  (0x8000, OFPXMT_OFB_IPV6_ND_TLL, 6)
+
+/* The LABEL in the first MPLS shim header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x8847 or 0x8848 exactly.
+ *
+ * Format: 32-bit integer in network byte order with 12 most-significant
+ * bits forced to 0. Only the lower 20 bits have meaning.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_MPLS_LABEL  OXM_HEADER  (0x8000, OFPXMT_OFB_MPLS_LABEL, 4)
+
+/* The TC in the first MPLS shim header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x8847 or 0x8848 exactly.
+ *
+ * Format: 8-bit integer with 5 most-significant bits forced to 0.
+ * Only the lower 3 bits have meaning.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_MPLS_TC     OXM_HEADER  (0x8000, OFPXMT_OFB_MPLS_TC, 1)
+
+/* The BoS bit in the first MPLS shim header.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x8847 or 0x8848 exactly.
+ *
+ * Format: 8-bit integer with 7 most-significant bits forced to 0.
+ * Only the lowest bit has meaning.
+ *
+ * Masking: Not maskable. */
+#define OXM_OF_MPLS_BOS     OXM_HEADER  (0x8000, OFPXMT_OFB_MPLS_BOS, 1)
+
+/* IEEE 802.1ah I-SID.
+ *
+ * For a packet with a PBB header, this is the I-SID from the
+ * outermost service tag.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x88E7 exactly.
+ *
+ * Format: 24-bit integer in network byte order.
+ *
+ * Masking: Arbitrary masks. */
+#define OXM_OF_PBB_ISID   OXM_HEADER  (0x8000, OFPXMT_OFB_PBB_ISID, 3)
+#define OXM_OF_PBB_ISID_W OXM_HEADER_W(0x8000, OFPXMT_OFB_PBB_ISID, 3)
+
+/* Logical Port Metadata.
+ *
+ * Metadata associated with a logical port.
+ * If the logical port performs encapsulation and decapsulation, this
+ * is the demultiplexing field from the encapsulation header.
+ * For example, for a packet received via GRE tunnel including a (32-bit) key,
+ * the key is stored in the low 32-bits and the high bits are zeroed.
+ * For an MPLS logical port, the low 20 bits represent the MPLS Label.
+ * For a VxLAN logical port, the low 24 bits represent the VNI.
+ * If the packet is not received through a logical port, the value is 0.
+ *
+ * Prereqs: None.
+ *
+ * Format: 64-bit integer in network byte order.
+ *
+ * Masking: Arbitrary masks. */
+#define OXM_OF_TUNNEL_ID    OXM_HEADER  (0x8000, OFPXMT_OFB_TUNNEL_ID, 8)
+#define OXM_OF_TUNNEL_ID_W  OXM_HEADER_W(0x8000, OFPXMT_OFB_TUNNEL_ID, 8)
+
+/* The IPv6 Extension Header pseudo-field.
+ *
+ * Prereqs:
+ *   OXM_OF_ETH_TYPE must match 0x86dd exactly
+ *
+ * Format: 16-bit integer with 7 most-significant bits forced to 0.
+ * Only the lower 9 bits have meaning.
+ *
+ * Masking: Maskable. */
+#define OXM_OF_IPV6_EXTHDR   OXM_HEADER  (0x8000, OFPXMT_OFB_IPV6_EXTHDR, 2)
+#define OXM_OF_IPV6_EXTHDR_W OXM_HEADER_W(0x8000, OFPXMT_OFB_IPV6_EXTHDR, 2)
+
+/* Bit definitions for IPv6 Extension Header pseudo-field. */
+enum ofp_ipv6exthdr_flags {      
+    OFPIEH_NONEXT = 1 << 0,     /* "No next header" encountered. */
+    OFPIEH_ESP    = 1 << 1,     /* Encrypted Sec Payload header present. */
+    OFPIEH_AUTH   = 1 << 2,     /* Authentication header present. */
+    OFPIEH_DEST   = 1 << 3,     /* 1 or 2 dest headers present. */
+    OFPIEH_FRAG   = 1 << 4,     /* Fragment header present. */
+    OFPIEH_ROUTER = 1 << 5,     /* Router header present. */
+    OFPIEH_HOP    = 1 << 6,     /* Hop-by-hop header present. */
+    OFPIEH_UNREP  = 1 << 7,     /* Unexpected repeats encountered. */
+    OFPIEH_UNSEQ  = 1 << 8,     /* Unexpected sequencing encountered. */
+};
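+
+/* Illustrative, non-normative example: the OXM_OF_IPV6_EXTHDR value is the
+ * bitwise OR of the OFPIEH_* flags seen in the packet; a packet carrying a
+ * hop-by-hop header followed by a fragment header would therefore match
+ * (OFPIEH_HOP | OFPIEH_FRAG) == 0x0050. */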
+
+/* Header for OXM experimenter match fields. */
+struct ofp_oxm_experimenter_header {
+    uint32_t oxm_header;        /* oxm_class = OFPXMC_EXPERIMENTER */
+    uint32_t experimenter;      /* Experimenter ID which takes the same
+                                   form as in struct ofp_experimenter_header. */
+};
+OFP_ASSERT(sizeof(struct ofp_oxm_experimenter_header) == 8);
+
+/* ## ----------------- ## */
+/* ## OpenFlow Actions. ## */
+/* ## ----------------- ## */
+
+enum ofp_action_type {
+    OFPAT_OUTPUT       = 0,  /* Output to switch port. */
+    OFPAT_COPY_TTL_OUT = 11, /* Copy TTL "outwards" -- from next-to-outermost
+                                to outermost */
+    OFPAT_COPY_TTL_IN  = 12, /* Copy TTL "inwards" -- from outermost to
+                               next-to-outermost */
+    OFPAT_SET_MPLS_TTL = 15, /* MPLS TTL */
+    OFPAT_DEC_MPLS_TTL = 16, /* Decrement MPLS TTL */
+
+    OFPAT_PUSH_VLAN    = 17, /* Push a new VLAN tag */
+    OFPAT_POP_VLAN     = 18, /* Pop the outer VLAN tag */
+    OFPAT_PUSH_MPLS    = 19, /* Push a new MPLS tag */
+    OFPAT_POP_MPLS     = 20, /* Pop the outer MPLS tag */
+    OFPAT_SET_QUEUE    = 21, /* Set queue id when outputting to a port */
+    OFPAT_GROUP        = 22, /* Apply group. */
+    OFPAT_SET_NW_TTL   = 23, /* IP TTL. */
+    OFPAT_DEC_NW_TTL   = 24, /* Decrement IP TTL. */
+    OFPAT_SET_FIELD    = 25, /* Set a header field using OXM TLV format. */
+    OFPAT_PUSH_PBB     = 26, /* Push a new PBB service tag (I-TAG) */
+    OFPAT_POP_PBB      = 27, /* Pop the outer PBB service tag (I-TAG) */
+    OFPAT_EXPERIMENTER = 0xffff
+};
+
+/* Action header that is common to all actions.  The length includes the
+ * header and any padding used to make the action 64-bit aligned.
+ * NB: The length of an action *must* always be a multiple of eight. */
+struct ofp_action_header {
+    uint16_t type;                  /* One of OFPAT_*. */
+    uint16_t len;                   /* Length of action, including this
+                                       header.  This is the length of action,
+                                       including any padding to make it
+                                       64-bit aligned. */
+    uint8_t pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_action_header) == 8);
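+
+/* Illustrative, non-normative helper (the name ofp_example_pad_to_64bits is
+ * assumed here, not defined by the protocol): rounds a raw action or
+ * instruction length up to the required multiple of eight bytes. */
+static inline uint16_t
+ofp_example_pad_to_64bits(uint16_t raw_len)
+{
+    return (uint16_t)((raw_len + 7) / 8 * 8);
+}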
+
+enum ofp_controller_max_len {
+	OFPCML_MAX       = 0xffe5, /* maximum max_len value which can be used
+	                              to request a specific byte length. */
+	OFPCML_NO_BUFFER = 0xffff  /* indicates that no buffering should be
+	                              applied and the whole packet is to be
+	                              sent to the controller. */
+};
+
+/* Action structure for OFPAT_OUTPUT, which sends packets out 'port'.
+ * When the 'port' is the OFPP_CONTROLLER, 'max_len' indicates the max
+ * number of bytes to send.  A 'max_len' of zero means no bytes of the
+ * packet should be sent. A 'max_len' of OFPCML_NO_BUFFER means that
+ * the packet is not buffered and the complete packet is to be sent to
+ * the controller. */
+struct ofp_action_output {
+    uint16_t type;                  /* OFPAT_OUTPUT. */
+    uint16_t len;                   /* Length is 16. */
+    uint32_t port;                  /* Output port. */
+    uint16_t max_len;               /* Max length to send to controller. */
+    uint8_t pad[6];                 /* Pad to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_output) == 16);
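+
+/* Illustrative, non-normative example: an output action that punts the whole
+ * packet to the controller carries type = OFPAT_OUTPUT, len = 16,
+ * port = OFPP_CONTROLLER and max_len = OFPCML_NO_BUFFER, with the pad bytes
+ * zeroed and all multi-byte fields in network byte order. */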
+
+/* Action structure for OFPAT_SET_MPLS_TTL. */
+struct ofp_action_mpls_ttl {
+    uint16_t type;                  /* OFPAT_SET_MPLS_TTL. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t mpls_ttl;               /* MPLS TTL */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_mpls_ttl) == 8);
+
+/* Action structure for OFPAT_PUSH_VLAN/MPLS/PBB. */
+struct ofp_action_push {
+    uint16_t type;                  /* OFPAT_PUSH_VLAN/MPLS/PBB. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t ethertype;             /* Ethertype */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_push) == 8);
+
+/* Action structure for OFPAT_POP_MPLS. */
+struct ofp_action_pop_mpls {
+    uint16_t type;                  /* OFPAT_POP_MPLS. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t ethertype;             /* Ethertype */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_pop_mpls) == 8);
+
+/* Action structure for OFPAT_GROUP. */
+struct ofp_action_group {
+    uint16_t type;                  /* OFPAT_GROUP. */
+    uint16_t len;                   /* Length is 8. */
+    uint32_t group_id;              /* Group identifier. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_group) == 8);
+
+/* Action structure for OFPAT_SET_NW_TTL. */
+struct ofp_action_nw_ttl {
+    uint16_t type;                  /* OFPAT_SET_NW_TTL. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t nw_ttl;                 /* IP TTL */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_nw_ttl) == 8);
+
+/* Action structure for OFPAT_SET_FIELD. */
+struct ofp_action_set_field {
+    uint16_t type;                  /* OFPAT_SET_FIELD. */
+    uint16_t len;                   /* Length is padded to 64 bits. */
+    /* Followed by:
+     *   - Exactly oxm_len bytes containing a single OXM TLV, then
+     *   - Exactly ((oxm_len + 4) + 7)/8*8 - (oxm_len + 4) (between 0 and 7)
+     *     bytes of all-zero bytes
+     */
+    uint8_t field[4];               /* OXM TLV - Make compiler happy */
+};
+OFP_ASSERT(sizeof(struct ofp_action_set_field) == 8);
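+
+/* Illustrative, non-normative example of the padding rule above: setting
+ * OXM_OF_VLAN_VID uses a 6-byte OXM TLV (4-byte OXM header plus 2-byte
+ * value), so the action carries ((6 + 4) + 7)/8*8 - (6 + 4) = 6 trailing
+ * zero bytes and is 16 bytes long in total. */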
+
+/* Action header for OFPAT_EXPERIMENTER.
+ * The rest of the body is experimenter-defined. */
+struct ofp_action_experimenter_header {
+    uint16_t type;                  /* OFPAT_EXPERIMENTER. */
+    uint16_t len;                   /* Length is a multiple of 8. */
+    uint32_t experimenter;          /* Experimenter ID which takes the same
+                                       form as in struct
+                                       ofp_experimenter_header. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_experimenter_header) == 8);
+
+/* ## ---------------------- ## */
+/* ## OpenFlow Instructions. ## */
+/* ## ---------------------- ## */
+
+enum ofp_instruction_type {
+    OFPIT_GOTO_TABLE = 1,       /* Setup the next table in the lookup
+                                   pipeline */
+    OFPIT_WRITE_METADATA = 2,   /* Setup the metadata field for use later in
+                                   pipeline */
+    OFPIT_WRITE_ACTIONS = 3,    /* Write the action(s) onto the datapath action
+                                   set */
+    OFPIT_APPLY_ACTIONS = 4,    /* Applies the action(s) immediately */
+    OFPIT_CLEAR_ACTIONS = 5,    /* Clears all actions from the datapath
+                                   action set */
+    OFPIT_METER = 6,            /* Apply meter (rate limiter) */
+
+    OFPIT_EXPERIMENTER = 0xFFFF  /* Experimenter instruction */
+};
+
+/* Instruction header that is common to all instructions.  The length includes
+ * the header and any padding used to make the instruction 64-bit aligned.
+ * NB: The length of an instruction *must* always be a multiple of eight. */
+struct ofp_instruction {
+    uint16_t type;                /* Instruction type */
+    uint16_t len;                 /* Length of this struct in bytes. */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction) == 4);
+
+/* Instruction structure for OFPIT_GOTO_TABLE */
+struct ofp_instruction_goto_table {
+    uint16_t type;                /* OFPIT_GOTO_TABLE */
+    uint16_t len;                 /* Length of this struct in bytes. */
+    uint8_t table_id;             /* Set next table in the lookup pipeline */
+    uint8_t pad[3];               /* Pad to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_goto_table) == 8);
+
+/* Instruction structure for OFPIT_WRITE_METADATA */
+struct ofp_instruction_write_metadata {
+    uint16_t type;                /* OFPIT_WRITE_METADATA */
+    uint16_t len;                 /* Length of this struct in bytes. */
+    uint8_t pad[4];               /* Align to 64-bits */
+    uint64_t metadata;            /* Metadata value to write */
+    uint64_t metadata_mask;       /* Metadata write bitmask */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_write_metadata) == 24);
+
+/* Instruction structure for OFPIT_WRITE/APPLY/CLEAR_ACTIONS */
+struct ofp_instruction_actions {
+    uint16_t type;              /* One of OFPIT_*_ACTIONS */
+    uint16_t len;               /* Length of this struct in bytes. */
+    uint8_t pad[4];             /* Align to 64-bits */
+    struct ofp_action_header actions[0];  /* Actions associated with
+                                             OFPIT_WRITE_ACTIONS and
+                                             OFPIT_APPLY_ACTIONS */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_actions) == 8);
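+
+/* Illustrative, non-normative sizing example: an OFPIT_APPLY_ACTIONS
+ * instruction carrying a single 16-byte output action has len = 8 + 16 = 24;
+ * the number of actions is inferred from len, as no explicit count is
+ * carried. */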
+
+/* Instruction structure for OFPIT_METER */
+struct ofp_instruction_meter {
+    uint16_t type;                /* OFPIT_METER */
+    uint16_t len;                 /* Length is 8. */
+    uint32_t meter_id;            /* Meter instance. */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_meter) == 8);
+
+/* Instruction structure for experimental instructions */
+struct ofp_instruction_experimenter {
+    uint16_t type;		/* OFPIT_EXPERIMENTER */
+    uint16_t len;               /* Length of this struct in bytes */
+    uint32_t experimenter;      /* Experimenter ID which takes the same form
+                                   as in struct ofp_experimenter_header. */
+    /* Experimenter-defined arbitrary additional data. */
+};
+OFP_ASSERT(sizeof(struct ofp_instruction_experimenter) == 8);
+
+/* ## --------------------------- ## */
+/* ## OpenFlow Flow Modification. ## */
+/* ## --------------------------- ## */
+
+enum ofp_flow_mod_command {
+    OFPFC_ADD           = 0, /* New flow. */
+    OFPFC_MODIFY        = 1, /* Modify all matching flows. */
+    OFPFC_MODIFY_STRICT = 2, /* Modify entry strictly matching wildcards and
+                                priority. */
+    OFPFC_DELETE        = 3, /* Delete all matching flows. */
+    OFPFC_DELETE_STRICT = 4, /* Delete entry strictly matching wildcards and
+                                priority. */
+};
+
+/* Value used in "idle_timeout" and "hard_timeout" to indicate that the entry
+ * is permanent. */
+#define OFP_FLOW_PERMANENT 0
+
+/* By default, choose a priority in the middle. */
+#define OFP_DEFAULT_PRIORITY 0x8000
+
+enum ofp_flow_mod_flags {
+    OFPFF_SEND_FLOW_REM = 1 << 0,  /* Send flow removed message when flow
+                                    * expires or is deleted. */
+    OFPFF_CHECK_OVERLAP = 1 << 1,  /* Check for overlapping entries first. */
+    OFPFF_RESET_COUNTS  = 1 << 2,  /* Reset flow packet and byte counts. */
+    OFPFF_NO_PKT_COUNTS = 1 << 3,  /* Don't keep track of packet count. */
+    OFPFF_NO_BYT_COUNTS = 1 << 4,  /* Don't keep track of byte count. */
+};
+
+/* Flow setup and teardown (controller -> datapath). */
+struct ofp_flow_mod {
+    struct ofp_header header;
+    uint64_t cookie;             /* Opaque controller-issued identifier. */
+    uint64_t cookie_mask;        /* Mask used to restrict the cookie bits
+                                    that must match when the command is
+                                    OFPFC_MODIFY* or OFPFC_DELETE*. A value
+                                    of 0 indicates no restriction. */
+
+    /* Flow actions. */
+    uint8_t table_id;             /* ID of the table to put the flow in.
+                                     For OFPFC_DELETE_* commands, OFPTT_ALL
+                                     can also be used to delete matching
+                                     flows from all tables. */
+    uint8_t command;              /* One of OFPFC_*. */
+    uint16_t idle_timeout;        /* Idle time before discarding (seconds). */
+    uint16_t hard_timeout;        /* Max time before discarding (seconds). */
+    uint16_t priority;            /* Priority level of flow entry. */
+    uint32_t buffer_id;           /* Buffered packet to apply to, or
+                                     OFP_NO_BUFFER.
+                                     Not meaningful for OFPFC_DELETE*. */
+    uint32_t out_port;            /* For OFPFC_DELETE* commands, require
+                                     matching entries to include this as an
+                                     output port.  A value of OFPP_ANY
+                                     indicates no restriction. */
+    uint32_t out_group;           /* For OFPFC_DELETE* commands, require
+                                     matching entries to include this as an
+                                     output group.  A value of OFPG_ANY
+                                     indicates no restriction. */
+    uint16_t flags;               /* One of OFPFF_*. */
+    uint8_t pad[2];
+    struct ofp_match match;       /* Fields to match. Variable size. */
+    //struct ofp_instruction instructions[0]; /* Instruction set */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_mod) == 56);
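+
+/* Illustrative, non-normative sizing example: a flow mod whose match holds a
+ * single OXM_OF_IN_PORT TLV (8 bytes) has match.length = 12, padded to 16
+ * bytes on the wire, so any instructions begin at byte offset 48 + 16 = 64
+ * of the message. */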
+
+/* Group numbering. Groups can use any number up to OFPG_MAX. */
+enum ofp_group {
+    /* Last usable group number. */
+    OFPG_MAX        = 0xffffff00,
+
+    /* Fake groups. */
+    OFPG_ALL        = 0xfffffffc,  /* Represents all groups for group delete
+                                      commands. */
+    OFPG_ANY        = 0xffffffff   /* Wildcard group used only for flow stats
+                                      requests. Selects all flows regardless of
+                                      group (including flows with no group).
+                                      */
+};
+
+/* Group commands */
+enum ofp_group_mod_command {
+    OFPGC_ADD    = 0,       /* New group. */
+    OFPGC_MODIFY = 1,       /* Modify all matching groups. */
+    OFPGC_DELETE = 2,       /* Delete all matching groups. */
+};
+
+/* Bucket for use in groups. */
+struct ofp_bucket {
+    uint16_t len;                   /* Length of the bucket in bytes, including
+                                       this header and any padding to make it
+                                       64-bit aligned. */
+    uint16_t weight;                /* Relative weight of bucket.  Only
+                                       defined for select groups. */
+    uint32_t watch_port;            /* Port whose state affects whether this
+                                       bucket is live.  Only required for fast
+                                       failover groups. */
+    uint32_t watch_group;           /* Group whose state affects whether this
+                                       bucket is live.  Only required for fast
+                                       failover groups. */
+    uint8_t pad[4];
+    struct ofp_action_header actions[0]; /* The action length is inferred
+                                           from the length field in the
+                                           header. */
+};
+OFP_ASSERT(sizeof(struct ofp_bucket) == 16);
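+
+/* Illustrative, non-normative sizing example: a bucket containing a single
+ * 16-byte OFPAT_OUTPUT action has len = 16 (bucket header) + 16 = 32. */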
+
+/* Group setup and teardown (controller -> datapath). */
+struct ofp_group_mod {
+    struct ofp_header header;
+    uint16_t command;             /* One of OFPGC_*. */
+    uint8_t type;                 /* One of OFPGT_*. */
+    uint8_t pad;                  /* Pad to 64 bits. */
+    uint32_t group_id;            /* Group identifier. */
+    struct ofp_bucket buckets[0]; /* The length of the bucket array is inferred
+                                     from the length field in the header. */
+};
+OFP_ASSERT(sizeof(struct ofp_group_mod) == 16);
+
+/* Group types.  Values in the range [128, 255] are reserved for experimental
+ * use. */
+enum ofp_group_type {
+    OFPGT_ALL      = 0, /* All (multicast/broadcast) group.  */
+    OFPGT_SELECT   = 1, /* Select group. */
+    OFPGT_INDIRECT = 2, /* Indirect group. */
+    OFPGT_FF       = 3, /* Fast failover group. */
+};
+
+/* Special buffer-id to indicate 'no buffer' */
+#define OFP_NO_BUFFER 0xffffffff
+
+/* Send packet (controller -> datapath). */
+struct ofp_packet_out {
+    struct ofp_header header;
+    uint32_t buffer_id;           /* ID assigned by datapath (OFP_NO_BUFFER
+                                     if none). */
+    uint32_t in_port;             /* Packet's input port or OFPP_CONTROLLER. */
+    uint16_t actions_len;         /* Size of action array in bytes. */
+    uint8_t pad[6];
+    struct ofp_action_header actions[0]; /* Action list. */
+    /* uint8_t data[0]; */        /* Packet data.  The length is inferred
+                                     from the length field in the header.
+                                     (Only meaningful if buffer_id ==
+                                     OFP_NO_BUFFER.) */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_out) == 24);
+
+/* Why is this packet being sent to the controller? */
+enum ofp_packet_in_reason {
+    OFPR_NO_MATCH    = 0,   /* No matching flow (table-miss flow entry). */
+    OFPR_ACTION      = 1,   /* Action explicitly output to controller. */
+    OFPR_INVALID_TTL = 2,   /* Packet has invalid TTL */
+};
+
+/* Packet received on port (datapath -> controller). */
+struct ofp_packet_in {
+    struct ofp_header header;
+    uint32_t buffer_id;     /* ID assigned by datapath. */
+    uint16_t total_len;     /* Full length of frame. */
+    uint8_t reason;         /* Reason packet is being sent (one of OFPR_*) */
+    uint8_t table_id;       /* ID of the table that was looked up */
+    uint64_t cookie;        /* Cookie of the flow entry that was looked up. */
+    struct ofp_match match; /* Packet metadata. Variable size. */
+    /* Followed by:
+     *   - Exactly 2 all-zero padding bytes, then
+     *   - An Ethernet frame whose length is inferred from header.length.
+     * The padding bytes preceding the Ethernet frame ensure that the IP
+     * header (if any) following the Ethernet header is 32-bit aligned.
+     */
+    //uint8_t pad[2];       /* Align to 64 bit + 16 bit */
+    //uint8_t data[0];      /* Ethernet frame */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_in) == 32);
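+
+/* Illustrative, non-normative offset example: with the minimal 8-byte match,
+ * the Ethernet frame begins at byte offset 32 + 2 = 34 of the message, so a
+ * following IP header starts at offset 34 + 14 = 48, which is 32-bit
+ * aligned. */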
+
+/* Why was this flow removed? */
+enum ofp_flow_removed_reason {
+    OFPRR_IDLE_TIMEOUT = 0,     /* Flow idle time exceeded idle_timeout. */
+    OFPRR_HARD_TIMEOUT = 1,     /* Time exceeded hard_timeout. */
+    OFPRR_DELETE       = 2,     /* Evicted by a DELETE flow mod. */
+    OFPRR_GROUP_DELETE = 3,     /* Group was removed. */
+};
+
+/* Flow removed (datapath -> controller). */
+struct ofp_flow_removed {
+    struct ofp_header header;
+    uint64_t cookie;          /* Opaque controller-issued identifier. */
+
+    uint16_t priority;        /* Priority level of flow entry. */
+    uint8_t reason;           /* One of OFPRR_*. */
+    uint8_t table_id;         /* ID of the table */
+
+    uint32_t duration_sec;    /* Time flow was alive in seconds. */
+    uint32_t duration_nsec;   /* Time flow was alive in nanoseconds beyond
+                                 duration_sec. */
+    uint16_t idle_timeout;    /* Idle timeout from original flow mod. */
+    uint16_t hard_timeout;    /* Hard timeout from original flow mod. */
+    uint64_t packet_count;
+    uint64_t byte_count;
+    struct ofp_match match;   /* Description of fields. Variable size. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_removed) == 56);
+
+/* Meter numbering. Flow meters can use any number up to OFPM_MAX. */
+enum ofp_meter {
+    /* Last usable meter. */
+    OFPM_MAX        = 0xffff0000,
+
+    /* Virtual meters. */
+    OFPM_SLOWPATH   = 0xfffffffd,  /* Meter for slow datapath. */
+    OFPM_CONTROLLER = 0xfffffffe,  /* Meter for controller connection. */
+    OFPM_ALL        = 0xffffffff,  /* Represents all meters for stats requests
+                                      and commands. */
+};
+
+/* Meter band types */
+enum ofp_meter_band_type {
+    OFPMBT_DROP            = 1,      /* Drop packet. */
+    OFPMBT_DSCP_REMARK     = 2,      /* Remark DSCP in the IP header. */
+    OFPMBT_EXPERIMENTER    = 0xFFFF  /* Experimenter meter band. */
+};
+
+/* Common header for all meter bands */
+struct ofp_meter_band_header {
+    uint16_t        type;    /* One of OFPMBT_*. */
+    uint16_t        len;     /* Length in bytes of this band. */
+    uint32_t        rate;    /* Rate for this band. */
+    uint32_t        burst_size; /* Size of bursts. */
+};
+OFP_ASSERT(sizeof(struct ofp_meter_band_header) == 12);
+
+/* OFPMBT_DROP band - drop packets */
+struct ofp_meter_band_drop {
+    uint16_t        type;    /* OFPMBT_DROP. */
+    uint16_t        len;     /* Length in bytes of this band. */
+    uint32_t        rate;    /* Rate for dropping packets. */
+    uint32_t        burst_size; /* Size of bursts. */
+    uint8_t         pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_meter_band_drop) == 16);
+
+/* OFPMBT_DSCP_REMARK band - Remark DSCP in the IP header */
+struct ofp_meter_band_dscp_remark {
+    uint16_t        type;    /* OFPMBT_DSCP_REMARK. */
+    uint16_t        len;     /* Length in bytes of this band. */
+    uint32_t        rate;    /* Rate for remarking packets. */
+    uint32_t        burst_size; /* Size of bursts. */
+    uint8_t         prec_level; /* Number of precedence levels to subtract. */
+    uint8_t         pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_meter_band_dscp_remark) == 16);
+
+/* OFPMBT_EXPERIMENTER band - Experimenter-defined meter band */
+struct ofp_meter_band_experimenter {
+    uint16_t        type;    /* One of OFPMBT_*. */
+    uint16_t        len;     /* Length in bytes of this band. */
+    uint32_t        rate;    /* Rate for this band. */
+    uint32_t        burst_size;   /* Size of bursts. */
+    uint32_t        experimenter; /* Experimenter ID which takes the same
+                                     form as in struct
+                                     ofp_experimenter_header. */
+};
+OFP_ASSERT(sizeof(struct ofp_meter_band_experimenter) == 16);
+
+/* Meter commands */
+enum ofp_meter_mod_command {
+    OFPMC_ADD,              /* New meter. */
+    OFPMC_MODIFY,           /* Modify specified meter. */
+    OFPMC_DELETE,           /* Delete specified meter. */
+};
+
+/* Meter configuration flags */
+enum ofp_meter_flags {
+    OFPMF_KBPS    = 1 << 0,     /* Rate value in kb/s (kilo-bit per second). */
+    OFPMF_PKTPS   = 1 << 1,     /* Rate value in packet/sec. */
+    OFPMF_BURST   = 1 << 2,     /* Do burst size. */
+    OFPMF_STATS   = 1 << 3,     /* Collect statistics. */
+};
+
+/* Meter configuration. OFPT_METER_MOD. */
+struct ofp_meter_mod {
+    struct ofp_header	header;
+    uint16_t            command;        /* One of OFPMC_*. */
+    uint16_t            flags;          /* One of OFPMF_*. */
+    uint32_t            meter_id;       /* Meter instance. */
+    struct ofp_meter_band_header bands[0]; /* The bands length is
+                                           inferred from the length field
+                                           in the header. */
+};
+OFP_ASSERT(sizeof(struct ofp_meter_mod) == 16);
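+
+/* Illustrative, non-normative example: an OFPMC_ADD meter mod with
+ * flags = OFPMF_KBPS and a single OFPMBT_DROP band with rate = 1000 causes
+ * packets to be dropped once the metered traffic exceeds 1000 kb/s. */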
+
+/* Values for 'type' in ofp_error_message.  These values are immutable: they
+ * will not change in future versions of the protocol (although new values may
+ * be added). */
+enum ofp_error_type {
+    OFPET_HELLO_FAILED         = 0,  /* Hello protocol failed. */
+    OFPET_BAD_REQUEST          = 1,  /* Request was not understood. */
+    OFPET_BAD_ACTION           = 2,  /* Error in action description. */
+    OFPET_BAD_INSTRUCTION      = 3,  /* Error in instruction list. */
+    OFPET_BAD_MATCH            = 4,  /* Error in match. */
+    OFPET_FLOW_MOD_FAILED      = 5,  /* Problem modifying flow entry. */
+    OFPET_GROUP_MOD_FAILED     = 6,  /* Problem modifying group entry. */
+    OFPET_PORT_MOD_FAILED      = 7,  /* Port mod request failed. */
+    OFPET_TABLE_MOD_FAILED     = 8,  /* Table mod request failed. */
+    OFPET_QUEUE_OP_FAILED      = 9,  /* Queue operation failed. */
+    OFPET_SWITCH_CONFIG_FAILED = 10, /* Switch config request failed. */
+    OFPET_ROLE_REQUEST_FAILED  = 11, /* Controller Role request failed. */
+    OFPET_METER_MOD_FAILED     = 12, /* Error in meter. */
+    OFPET_TABLE_FEATURES_FAILED = 13, /* Setting table features failed. */
+    OFPET_EXPERIMENTER = 0xffff      /* Experimenter error messages. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_HELLO_FAILED.  'data' contains an
+ * ASCII text string that may give failure details. */
+enum ofp_hello_failed_code {
+    OFPHFC_INCOMPATIBLE = 0,    /* No compatible version. */
+    OFPHFC_EPERM        = 1,    /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_REQUEST.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_request_code {
+    OFPBRC_BAD_VERSION      = 0,  /* ofp_header.version not supported. */
+    OFPBRC_BAD_TYPE         = 1,  /* ofp_header.type not supported. */
+    OFPBRC_BAD_MULTIPART    = 2,  /* ofp_multipart_request.type not supported. */
+    OFPBRC_BAD_EXPERIMENTER = 3,  /* Experimenter id not supported
+                                   * (in ofp_experimenter_header or
+                                   * ofp_multipart_request or
+                                   * ofp_multipart_reply). */
+    OFPBRC_BAD_EXP_TYPE     = 4,  /* Experimenter type not supported. */
+    OFPBRC_EPERM            = 5,  /* Permissions error. */
+    OFPBRC_BAD_LEN          = 6,  /* Wrong request length for type. */
+    OFPBRC_BUFFER_EMPTY     = 7,  /* Specified buffer has already been used. */
+    OFPBRC_BUFFER_UNKNOWN   = 8,  /* Specified buffer does not exist. */
+    OFPBRC_BAD_TABLE_ID     = 9,  /* Specified table-id invalid or does not
+                                   * exist. */
+    OFPBRC_IS_SLAVE         = 10, /* Denied because controller is slave. */
+    OFPBRC_BAD_PORT         = 11, /* Invalid port. */
+    OFPBRC_BAD_PACKET       = 12, /* Invalid packet in packet-out. */
+    OFPBRC_MULTIPART_BUFFER_OVERFLOW    = 13, /* ofp_multipart_request
+                                     overflowed the assigned buffer. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_ACTION.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_action_code {
+    OFPBAC_BAD_TYPE           = 0,  /* Unknown action type. */
+    OFPBAC_BAD_LEN            = 1,  /* Length problem in actions. */
+    OFPBAC_BAD_EXPERIMENTER   = 2,  /* Unknown experimenter id specified. */
+    OFPBAC_BAD_EXP_TYPE       = 3,  /* Unknown action for experimenter id. */
+    OFPBAC_BAD_OUT_PORT       = 4,  /* Problem validating output port. */
+    OFPBAC_BAD_ARGUMENT       = 5,  /* Bad action argument. */
+    OFPBAC_EPERM              = 6,  /* Permissions error. */
+    OFPBAC_TOO_MANY           = 7,  /* Can't handle this many actions. */
+    OFPBAC_BAD_QUEUE          = 8,  /* Problem validating output queue. */
+    OFPBAC_BAD_OUT_GROUP      = 9,  /* Invalid group id in forward action. */
+    OFPBAC_MATCH_INCONSISTENT = 10, /* Action can't apply for this match,
+                                       or Set-Field missing prerequisite. */
+    OFPBAC_UNSUPPORTED_ORDER  = 11, /* Action order is unsupported for the
+                                 action list in an Apply-Actions instruction */
+    OFPBAC_BAD_TAG            = 12, /* Actions uses an unsupported
+                                       tag/encap. */
+    OFPBAC_BAD_SET_TYPE       = 13, /* Unsupported type in SET_FIELD action. */
+    OFPBAC_BAD_SET_LEN        = 14, /* Length problem in SET_FIELD action. */
+    OFPBAC_BAD_SET_ARGUMENT   = 15, /* Bad argument in SET_FIELD action. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_INSTRUCTION.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_instruction_code {
+    OFPBIC_UNKNOWN_INST     = 0, /* Unknown instruction. */
+    OFPBIC_UNSUP_INST       = 1, /* Switch or table does not support the
+                                    instruction. */
+    OFPBIC_BAD_TABLE_ID     = 2, /* Invalid Table-ID specified. */
+    OFPBIC_UNSUP_METADATA   = 3, /* Metadata value unsupported by datapath. */
+    OFPBIC_UNSUP_METADATA_MASK = 4, /* Metadata mask value unsupported by
+                                       datapath. */
+    OFPBIC_BAD_EXPERIMENTER = 5, /* Unknown experimenter id specified. */
+    OFPBIC_BAD_EXP_TYPE     = 6, /* Unknown instruction for experimenter id. */
+    OFPBIC_BAD_LEN          = 7, /* Length problem in instructions. */
+    OFPBIC_EPERM            = 8, /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_MATCH.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_match_code {
+    OFPBMC_BAD_TYPE         = 0,  /* Unsupported match type specified by the
+                                     match */
+    OFPBMC_BAD_LEN          = 1,  /* Length problem in match. */
+    OFPBMC_BAD_TAG          = 2,  /* Match uses an unsupported tag/encap. */
+    OFPBMC_BAD_DL_ADDR_MASK = 3,  /* Unsupported datalink addr mask - switch
+                                     does not support arbitrary datalink
+                                     address mask. */
+    OFPBMC_BAD_NW_ADDR_MASK = 4,  /* Unsupported network addr mask - switch
+                                     does not support arbitrary network
+                                     address mask. */
+    OFPBMC_BAD_WILDCARDS    = 5,  /* Unsupported combination of fields masked
+                                     or omitted in the match. */
+    OFPBMC_BAD_FIELD        = 6,  /* Unsupported field type in the match. */
+    OFPBMC_BAD_VALUE        = 7,  /* Unsupported value in a match field. */
+    OFPBMC_BAD_MASK         = 8,  /* Unsupported mask specified in the match,
+                                     field is not dl-address or nw-address. */
+    OFPBMC_BAD_PREREQ       = 9,  /* A prerequisite was not met. */
+    OFPBMC_DUP_FIELD        = 10, /* A field type was duplicated. */
+    OFPBMC_EPERM            = 11, /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_FLOW_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_flow_mod_failed_code {
+    OFPFMFC_UNKNOWN      = 0,   /* Unspecified error. */
+    OFPFMFC_TABLE_FULL   = 1,   /* Flow not added because table was full. */
+    OFPFMFC_BAD_TABLE_ID = 2,   /* Table does not exist */
+    OFPFMFC_OVERLAP      = 3,   /* Attempted to add overlapping flow with
+                                   CHECK_OVERLAP flag set. */
+    OFPFMFC_EPERM        = 4,   /* Permissions error. */
+    OFPFMFC_BAD_TIMEOUT  = 5,   /* Flow not added because of unsupported
+                                   idle/hard timeout. */
+    OFPFMFC_BAD_COMMAND  = 6,   /* Unsupported or unknown command. */
+    OFPFMFC_BAD_FLAGS    = 7,   /* Unsupported or unknown flags. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_GROUP_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_group_mod_failed_code {
+    OFPGMFC_GROUP_EXISTS         = 0,  /* Group not added because a group ADD
+                                          attempted to replace an
+                                          already-present group. */
+    OFPGMFC_INVALID_GROUP        = 1,  /* Group not added because Group
+                                          specified is invalid. */
+    OFPGMFC_WEIGHT_UNSUPPORTED   = 2,  /* Switch does not support unequal load
+                                          sharing with select groups. */
+    OFPGMFC_OUT_OF_GROUPS        = 3,  /* The group table is full. */
+    OFPGMFC_OUT_OF_BUCKETS       = 4,  /* The maximum number of action buckets
+                                          for a group has been exceeded. */
+    OFPGMFC_CHAINING_UNSUPPORTED = 5,  /* Switch does not support groups that
+                                          forward to groups. */
+    OFPGMFC_WATCH_UNSUPPORTED    = 6,  /* This group cannot watch the watch_port
+                                          or watch_group specified. */
+    OFPGMFC_LOOP                 = 7,  /* Group entry would cause a loop. */
+    OFPGMFC_UNKNOWN_GROUP        = 8,  /* Group not modified because a group
+                                          MODIFY attempted to modify a
+                                          non-existent group. */
+    OFPGMFC_CHAINED_GROUP        = 9,  /* Group not deleted because another
+                                          group is forwarding to it. */
+    OFPGMFC_BAD_TYPE             = 10, /* Unsupported or unknown group type. */
+    OFPGMFC_BAD_COMMAND          = 11, /* Unsupported or unknown command. */
+    OFPGMFC_BAD_BUCKET           = 12, /* Error in bucket. */
+    OFPGMFC_BAD_WATCH            = 13, /* Error in watch port/group. */
+    OFPGMFC_EPERM                = 14, /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_PORT_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_port_mod_failed_code {
+    OFPPMFC_BAD_PORT      = 0,   /* Specified port number does not exist. */
+    OFPPMFC_BAD_HW_ADDR   = 1,   /* Specified hardware address does not
+                                  * match the port number. */
+    OFPPMFC_BAD_CONFIG    = 2,   /* Specified config is invalid. */
+    OFPPMFC_BAD_ADVERTISE = 3,   /* Specified advertise is invalid. */
+    OFPPMFC_EPERM         = 4,   /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_TABLE_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_table_mod_failed_code {
+    OFPTMFC_BAD_TABLE  = 0,      /* Specified table does not exist. */
+    OFPTMFC_BAD_CONFIG = 1,      /* Specified config is invalid. */
+    OFPTMFC_EPERM      = 2,      /* Permissions error. */
+};
+
+/* ofp_error msg 'code' values for OFPET_QUEUE_OP_FAILED. 'data' contains
+ * at least the first 64 bytes of the failed request */
+enum ofp_queue_op_failed_code {
+    OFPQOFC_BAD_PORT   = 0,     /* Invalid port (or port does not exist). */
+    OFPQOFC_BAD_QUEUE  = 1,     /* Queue does not exist. */
+    OFPQOFC_EPERM      = 2,     /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_SWITCH_CONFIG_FAILED. 'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_switch_config_failed_code {
+    OFPSCFC_BAD_FLAGS  = 0,      /* Specified flags value is invalid. */
+    OFPSCFC_BAD_LEN    = 1,      /* Specified len is invalid. */
+    OFPSCFC_EPERM      = 2,      /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_ROLE_REQUEST_FAILED. 'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_role_request_failed_code {
+    OFPRRFC_STALE      = 0,      /* Stale Message: old generation_id. */
+    OFPRRFC_UNSUP      = 1,      /* Controller role change unsupported. */
+    OFPRRFC_BAD_ROLE   = 2,      /* Invalid role. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_METER_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_meter_mod_failed_code {
+    OFPMMFC_UNKNOWN       = 0,  /* Unspecified error. */
+    OFPMMFC_METER_EXISTS  = 1,  /* Meter not added because a Meter ADD
+                                 * attempted to replace an existing Meter. */
+    OFPMMFC_INVALID_METER = 2,  /* Meter not added because Meter specified
+                                 * is invalid. */
+    OFPMMFC_UNKNOWN_METER = 3,  /* Meter not modified because a Meter
+                                   MODIFY attempted to modify a non-existent
+                                   Meter. */
+    OFPMMFC_BAD_COMMAND   = 4,  /* Unsupported or unknown command. */
+    OFPMMFC_BAD_FLAGS     = 5,  /* Flag configuration unsupported. */
+    OFPMMFC_BAD_RATE      = 6,  /* Rate unsupported. */
+    OFPMMFC_BAD_BURST     = 7,  /* Burst size unsupported. */
+    OFPMMFC_BAD_BAND      = 8,  /* Band unsupported. */
+    OFPMMFC_BAD_BAND_VALUE = 9, /* Band value unsupported. */
+    OFPMMFC_OUT_OF_METERS = 10, /* No more meters available. */
+    OFPMMFC_OUT_OF_BANDS  = 11, /* The maximum number of properties
+                                 * for a meter has been exceeded. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_TABLE_FEATURES_FAILED. 'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_table_features_failed_code {
+    OFPTFFC_BAD_TABLE    = 0,      /* Specified table does not exist. */
+    OFPTFFC_BAD_METADATA = 1,      /* Invalid metadata mask. */
+    OFPTFFC_BAD_TYPE     = 2,      /* Unknown property type. */
+    OFPTFFC_BAD_LEN      = 3,      /* Length problem in properties. */
+    OFPTFFC_BAD_ARGUMENT = 4,      /* Unsupported property value. */
+    OFPTFFC_EPERM        = 5,      /* Permissions error. */
+};
+
+/* OFPT_ERROR: Error message (datapath -> controller). */
+struct ofp_error_msg {
+    struct ofp_header header;
+
+    uint16_t type;
+    uint16_t code;
+    uint8_t data[0];          /* Variable-length data.  Interpreted based
+                                 on the type and code.  No padding. */
+};
+OFP_ASSERT(sizeof(struct ofp_error_msg) == 12);
+
+/* OFPET_EXPERIMENTER: Error message (datapath -> controller). */
+struct ofp_error_experimenter_msg {
+    struct ofp_header header;
+
+    uint16_t type;            /* OFPET_EXPERIMENTER. */
+    uint16_t exp_type;        /* Experimenter defined. */
+    uint32_t experimenter;    /* Experimenter ID which takes the same form
+                                 as in struct ofp_experimenter_header. */
+    uint8_t data[0];          /* Variable-length data.  Interpreted based
+                                 on the type and code.  No padding. */
+};
+OFP_ASSERT(sizeof(struct ofp_error_experimenter_msg) == 16);
+
+enum ofp_multipart_types {
+    /* Description of this OpenFlow switch.
+     * The request body is empty.
+     * The reply body is struct ofp_desc. */
+    OFPMP_DESC = 0,
+
+    /* Individual flow statistics.
+     * The request body is struct ofp_flow_stats_request.
+     * The reply body is an array of struct ofp_flow_stats. */
+    OFPMP_FLOW = 1,
+
+    /* Aggregate flow statistics.
+     * The request body is struct ofp_aggregate_stats_request.
+     * The reply body is struct ofp_aggregate_stats_reply. */
+    OFPMP_AGGREGATE = 2,
+
+    /* Flow table statistics.
+     * The request body is empty.
+     * The reply body is an array of struct ofp_table_stats. */
+    OFPMP_TABLE = 3,
+
+    /* Port statistics.
+     * The request body is struct ofp_port_stats_request.
+     * The reply body is an array of struct ofp_port_stats. */
+    OFPMP_PORT_STATS = 4,
+
+    /* Queue statistics for a port
+     * The request body is struct ofp_queue_stats_request.
+     * The reply body is an array of struct ofp_queue_stats */
+    OFPMP_QUEUE = 5,
+
+    /* Group counter statistics.
+     * The request body is struct ofp_group_stats_request.
+     * The reply is an array of struct ofp_group_stats. */
+    OFPMP_GROUP = 6,
+
+    /* Group description.
+     * The request body is empty.
+     * The reply body is an array of struct ofp_group_desc_stats. */
+    OFPMP_GROUP_DESC = 7,
+
+    /* Group features.
+     * The request body is empty.
+     * The reply body is struct ofp_group_features. */
+    OFPMP_GROUP_FEATURES = 8,
+
+    /* Meter statistics.
+     * The request body is struct ofp_meter_multipart_requests.
+     * The reply body is an array of struct ofp_meter_stats. */
+    OFPMP_METER = 9,
+
+    /* Meter configuration.
+     * The request body is struct ofp_meter_multipart_requests.
+     * The reply body is an array of struct ofp_meter_config. */
+    OFPMP_METER_CONFIG = 10,
+
+    /* Meter features.
+     * The request body is empty.
+     * The reply body is struct ofp_meter_features. */
+    OFPMP_METER_FEATURES = 11,
+
+    /* Table features.
+     * The request body is either empty or contains an array of
+     * struct ofp_table_features containing the controller's
+     * desired view of the switch. If the switch is unable to
+     * set the specified view an error is returned.
+     * The reply body is an array of struct ofp_table_features. */
+    OFPMP_TABLE_FEATURES = 12,
+
+    /* Port description.
+     * The request body is empty.
+     * The reply body is an array of struct ofp_port. */
+    OFPMP_PORT_DESC = 13,
+
+    /* Experimenter extension.
+     * The request and reply bodies begin with
+     * struct ofp_experimenter_multipart_header.
+     * The request and reply bodies are otherwise experimenter-defined. */
+    OFPMP_EXPERIMENTER = 0xffff
+};
+
+enum ofp_multipart_request_flags {
+    OFPMPF_REQ_MORE  = 1 << 0  /* More requests to follow. */
+};
+
+struct ofp_multipart_request {
+    struct ofp_header header;
+    uint16_t type;              /* One of the OFPMP_* constants. */
+    uint16_t flags;             /* OFPMPF_REQ_* flags. */
+    uint8_t pad[4];
+    uint8_t body[0];            /* Body of the request. */
+};
+OFP_ASSERT(sizeof(struct ofp_multipart_request) == 16);
+
+enum ofp_multipart_reply_flags {
+    OFPMPF_REPLY_MORE  = 1 << 0  /* More replies to follow. */
+};
+
+struct ofp_multipart_reply {
+    struct ofp_header header;
+    uint16_t type;              /* One of the OFPMP_* constants. */
+    uint16_t flags;             /* OFPMPF_REPLY_* flags. */
+    uint8_t pad[4];
+    uint8_t body[0];            /* Body of the reply. */
+};
+OFP_ASSERT(sizeof(struct ofp_multipart_reply) == 16);
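+
+/* Illustrative, non-normative example: a reply that does not fit in a single
+ * message is split across several ofp_multipart_reply messages sharing the
+ * same transaction id; every message except the last sets OFPMPF_REPLY_MORE
+ * in 'flags'. */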
+
+#define DESC_STR_LEN   256
+#define SERIAL_NUM_LEN 32
+/* Body of reply to OFPMP_DESC request.  Each entry is a NULL-terminated
+ * ASCII string. */
+struct ofp_desc {
+    char mfr_desc[DESC_STR_LEN];       /* Manufacturer description. */
+    char hw_desc[DESC_STR_LEN];        /* Hardware description. */
+    char sw_desc[DESC_STR_LEN];        /* Software description. */
+    char serial_num[SERIAL_NUM_LEN];   /* Serial number. */
+    char dp_desc[DESC_STR_LEN];        /* Human readable description of datapath. */
+};
+OFP_ASSERT(sizeof(struct ofp_desc) == 1056);
+
+/* Body for ofp_multipart_request of type OFPMP_FLOW. */
+struct ofp_flow_stats_request {
+    uint8_t table_id;         /* ID of table to read (from ofp_table_stats),
+                                 OFPTT_ALL for all tables. */
+    uint8_t pad[3];           /* Align to 32 bits. */
+    uint32_t out_port;        /* Require matching entries to include this
+                                 as an output port.  A value of OFPP_ANY
+                                 indicates no restriction. */
+    uint32_t out_group;       /* Require matching entries to include this
+                                 as an output group.  A value of OFPG_ANY
+                                 indicates no restriction. */
+    uint8_t pad2[4];          /* Align to 64 bits. */
+    uint64_t cookie;          /* Require matching entries to contain this
+                                 cookie value */
+    uint64_t cookie_mask;     /* Mask used to restrict the cookie bits that
+                                 must match. A value of 0 indicates
+                                 no restriction. */
+    struct ofp_match match;   /* Fields to match. Variable size. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_stats_request) == 40);
+
+/* Body of reply to OFPMP_FLOW request. */
+struct ofp_flow_stats {
+    uint16_t length;          /* Length of this entry. */
+    uint8_t table_id;         /* ID of table flow came from. */
+    uint8_t pad;
+    uint32_t duration_sec;    /* Time flow has been alive in seconds. */
+    uint32_t duration_nsec;   /* Time flow has been alive in nanoseconds beyond
+                                 duration_sec. */
+    uint16_t priority;        /* Priority of the entry. */
+    uint16_t idle_timeout;    /* Number of seconds idle before expiration. */
+    uint16_t hard_timeout;    /* Number of seconds before expiration. */
+    uint16_t flags;           /* One of OFPFF_*. */
+    uint8_t pad2[4];          /* Align to 64-bits. */
+    uint64_t cookie;          /* Opaque controller-issued identifier. */
+    uint64_t packet_count;    /* Number of packets in flow. */
+    uint64_t byte_count;      /* Number of bytes in flow. */
+    struct ofp_match match;   /* Description of fields. Variable size. */
+    //struct ofp_instruction instructions[0]; /* Instruction set. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_stats) == 56);
+
+/* Body for ofp_multipart_request of type OFPMP_AGGREGATE. */
+struct ofp_aggregate_stats_request {
+    uint8_t table_id;         /* ID of table to read (from ofp_table_stats)
+                                 OFPTT_ALL for all tables. */
+    uint8_t pad[3];           /* Align to 32 bits. */
+    uint32_t out_port;        /* Require matching entries to include this
+                                 as an output port.  A value of OFPP_ANY
+                                 indicates no restriction. */
+    uint32_t out_group;       /* Require matching entries to include this
+                                 as an output group.  A value of OFPG_ANY
+                                 indicates no restriction. */
+    uint8_t pad2[4];          /* Align to 64 bits. */
+    uint64_t cookie;          /* Require matching entries to contain this
+                                 cookie value */
+    uint64_t cookie_mask;     /* Mask used to restrict the cookie bits that
+                                 must match. A value of 0 indicates
+                                 no restriction. */
+    struct ofp_match match;   /* Fields to match. Variable size. */
+};
+OFP_ASSERT(sizeof(struct ofp_aggregate_stats_request) == 40);
+
+/* Body of reply to OFPMP_AGGREGATE request. */
+struct ofp_aggregate_stats_reply {
+    uint64_t packet_count;    /* Number of packets in flows. */
+    uint64_t byte_count;      /* Number of bytes in flows. */
+    uint32_t flow_count;      /* Number of flows. */
+    uint8_t pad[4];           /* Align to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_aggregate_stats_reply) == 24);
+
+/* Table Feature property types.
+ * Low order bit cleared indicates a property for a regular Flow Entry.
+ * Low order bit set indicates a property for the Table-Miss Flow Entry.
+ */
+enum ofp_table_feature_prop_type {
+    OFPTFPT_INSTRUCTIONS           = 0,  /* Instructions property. */
+    OFPTFPT_INSTRUCTIONS_MISS      = 1,  /* Instructions for table-miss. */
+    OFPTFPT_NEXT_TABLES            = 2,  /* Next Table property. */
+    OFPTFPT_NEXT_TABLES_MISS       = 3,  /* Next Table for table-miss. */
+    OFPTFPT_WRITE_ACTIONS          = 4,  /* Write Actions property. */
+    OFPTFPT_WRITE_ACTIONS_MISS     = 5,  /* Write Actions for table-miss. */
+    OFPTFPT_APPLY_ACTIONS          = 6,  /* Apply Actions property. */
+    OFPTFPT_APPLY_ACTIONS_MISS     = 7,  /* Apply Actions for table-miss. */
+    OFPTFPT_MATCH                  = 8,  /* Match property. */
+    OFPTFPT_WILDCARDS              = 10, /* Wildcards property. */
+    OFPTFPT_WRITE_SETFIELD         = 12, /* Write Set-Field property. */
+    OFPTFPT_WRITE_SETFIELD_MISS    = 13, /* Write Set-Field for table-miss. */
+    OFPTFPT_APPLY_SETFIELD         = 14, /* Apply Set-Field property. */
+    OFPTFPT_APPLY_SETFIELD_MISS    = 15, /* Apply Set-Field for table-miss. */
+    OFPTFPT_EXPERIMENTER           = 0xFFFE, /* Experimenter property. */
+    OFPTFPT_EXPERIMENTER_MISS      = 0xFFFF, /* Experimenter for table-miss. */
+};
+
+/* Common header for all Table Feature Properties */
+struct ofp_table_feature_prop_header {
+    uint16_t         type;    /* One of OFPTFPT_*. */
+    uint16_t         length;  /* Length in bytes of this property. */
+};
+OFP_ASSERT(sizeof(struct ofp_table_feature_prop_header) == 4);
+
+/* Instructions property */
+struct ofp_table_feature_prop_instructions {
+    uint16_t         type;    /* One of OFPTFPT_INSTRUCTIONS,
+                                 OFPTFPT_INSTRUCTIONS_MISS. */
+    uint16_t         length;  /* Length in bytes of this property. */
+    /* Followed by:
+     *   - Exactly (length - 4) bytes containing the instruction ids, then
+     *   - Exactly (length + 7)/8*8 - (length) (between 0 and 7)
+     *     bytes of all-zero bytes */
+    struct ofp_instruction   instruction_ids[0];   /* List of instructions */
+};
+OFP_ASSERT(sizeof(struct ofp_table_feature_prop_instructions) == 4);
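+
+/* Illustrative, non-normative padding example: a property listing two
+ * instruction ids (4 bytes each) has length = 4 + 2*4 = 12 and is followed
+ * by (12 + 7)/8*8 - 12 = 4 all-zero padding bytes. */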
+
+/* Next Tables property */
+struct ofp_table_feature_prop_next_tables {
+    uint16_t         type;    /* One of OFPTFPT_NEXT_TABLES,
+                                 OFPTFPT_NEXT_TABLES_MISS. */
+    uint16_t         length;  /* Length in bytes of this property. */
+    /* Followed by:
+     *   - Exactly (length - 4) bytes containing the table_ids, then
+     *   - Exactly (length + 7)/8*8 - (length) (between 0 and 7)
+     *     bytes of all-zero bytes */
+    uint8_t          next_table_ids[0];
+};
+OFP_ASSERT(sizeof(struct ofp_table_feature_prop_next_tables) == 4);
+
+/* Actions property */
+struct ofp_table_feature_prop_actions {
+    uint16_t         type;    /* One of OFPTFPT_WRITE_ACTIONS,
+                                 OFPTFPT_WRITE_ACTIONS_MISS,
+                                 OFPTFPT_APPLY_ACTIONS,
+                                 OFPTFPT_APPLY_ACTIONS_MISS. */
+    uint16_t         length;  /* Length in bytes of this property. */
+    /* Followed by:
+     *   - Exactly (length - 4) bytes containing the action_ids, then
+     *   - Exactly (length + 7)/8*8 - (length) (between 0 and 7)
+     *     bytes of all-zero bytes */
+    struct ofp_action_header  action_ids[0];      /* List of actions */
+};
+OFP_ASSERT(sizeof(struct ofp_table_feature_prop_actions) == 4);
+
+/* Match, Wildcard or Set-Field property */
+struct ofp_table_feature_prop_oxm {
+    uint16_t         type;    /* One of OFPTFPT_MATCH,
+                                 OFPTFPT_WILDCARDS,
+                                 OFPTFPT_WRITE_SETFIELD,
+                                 OFPTFPT_WRITE_SETFIELD_MISS,
+                                 OFPTFPT_APPLY_SETFIELD,
+                                 OFPTFPT_APPLY_SETFIELD_MISS. */
+    uint16_t         length;  /* Length in bytes of this property. */
+    /* Followed by:
+     *   - Exactly (length - 4) bytes containing the oxm_ids, then
+     *   - Exactly (length + 7)/8*8 - (length) (between 0 and 7)
+     *     bytes of all-zero bytes */
+    uint32_t         oxm_ids[0];   /* Array of OXM headers */
+};
+OFP_ASSERT(sizeof(struct ofp_table_feature_prop_oxm) == 4);
+
+/* Experimenter table feature property */
+struct ofp_table_feature_prop_experimenter {
+    uint16_t         type;    /* One of OFPTFPT_EXPERIMENTER,
+                                 OFPTFPT_EXPERIMENTER_MISS. */
+    uint16_t         length;  /* Length in bytes of this property. */
+    uint32_t         experimenter;  /* Experimenter ID which takes the same
+                                       form as in struct
+                                       ofp_experimenter_header. */
+    uint32_t         exp_type;      /* Experimenter defined. */
+    /* Followed by:
+     *   - Exactly (length - 12) bytes containing the experimenter data, then
+     *   - Exactly (length + 7)/8*8 - (length) (between 0 and 7)
+     *     bytes of all-zero bytes */
+    uint32_t         experimenter_data[0];
+};
+OFP_ASSERT(sizeof(struct ofp_table_feature_prop_experimenter) == 12);
+
+/* Body for ofp_multipart_request of type OFPMP_TABLE_FEATURES.
+ * Body of reply to OFPMP_TABLE_FEATURES request. */
+struct ofp_table_features {
+    uint16_t length;         /* Length is padded to 64 bits. */
+    uint8_t table_id;        /* Identifier of table.  Lower numbered tables
+                                are consulted first. */
+    uint8_t pad[5];          /* Align to 64-bits. */
+    char name[OFP_MAX_TABLE_NAME_LEN];
+    uint64_t metadata_match; /* Bits of metadata table can match. */
+    uint64_t metadata_write; /* Bits of metadata table can write. */
+    uint32_t config;         /* Bitmap of OFPTC_* values */
+    uint32_t max_entries;    /* Max number of entries supported. */
+
+    /* Table Feature Property list */
+    struct ofp_table_feature_prop_header properties[0];
+};
+OFP_ASSERT(sizeof(struct ofp_table_features) == 64);
+
+/* Body of reply to OFPMP_TABLE request. */
+struct ofp_table_stats {
+    uint8_t table_id;        /* Identifier of table.  Lower numbered tables
+                                are consulted first. */
+    uint8_t pad[3];          /* Align to 32-bits. */
+    uint32_t active_count;   /* Number of active entries. */
+    uint64_t lookup_count;   /* Number of packets looked up in table. */
+    uint64_t matched_count;  /* Number of packets that hit table. */
+};
+OFP_ASSERT(sizeof(struct ofp_table_stats) == 24);
+
+/* Body for ofp_multipart_request of type OFPMP_PORT_STATS. */
+struct ofp_port_stats_request {
+    uint32_t port_no;        /* OFPMP_PORT_STATS message must request statistics
+                              * either for a single port (specified in
+                              * port_no) or for all ports (if port_no ==
+                              * OFPP_ANY). */
+    uint8_t pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_port_stats_request) == 8);
+
+/* Body of reply to OFPMP_PORT_STATS request. If a counter is unsupported, set
+ * the field to all ones. */
+struct ofp_port_stats {
+    uint32_t port_no;
+    uint8_t pad[4];          /* Align to 64-bits. */
+    uint64_t rx_packets;     /* Number of received packets. */
+    uint64_t tx_packets;     /* Number of transmitted packets. */
+    uint64_t rx_bytes;       /* Number of received bytes. */
+    uint64_t tx_bytes;       /* Number of transmitted bytes. */
+    uint64_t rx_dropped;     /* Number of packets dropped by RX. */
+    uint64_t tx_dropped;     /* Number of packets dropped by TX. */
+    uint64_t rx_errors;      /* Number of receive errors.  This is a super-set
+                                of more specific receive errors and should be
+                                greater than or equal to the sum of all
+                                rx_*_err values. */
+    uint64_t tx_errors;      /* Number of transmit errors.  This is a super-set
+                                of more specific transmit errors and should be
+                                greater than or equal to the sum of all
+                                tx_*_err values (none currently defined.) */
+    uint64_t rx_frame_err;   /* Number of frame alignment errors. */
+    uint64_t rx_over_err;    /* Number of packets with RX overrun. */
+    uint64_t rx_crc_err;     /* Number of CRC errors. */
+    uint64_t collisions;     /* Number of collisions. */
+    uint32_t duration_sec;   /* Time port has been alive in seconds. */
+    uint32_t duration_nsec;  /* Time port has been alive in nanoseconds beyond
+                                duration_sec. */
+};
+OFP_ASSERT(sizeof(struct ofp_port_stats) == 112);
+
+/* Body of OFPMP_GROUP request. */
+struct ofp_group_stats_request {
+    uint32_t group_id;       /* All groups if OFPG_ALL. */
+    uint8_t pad[4];          /* Align to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_group_stats_request) == 8);
+
+/* Used in group stats replies. */
+struct ofp_bucket_counter {
+    uint64_t packet_count;   /* Number of packets processed by bucket. */
+    uint64_t byte_count;     /* Number of bytes processed by bucket. */
+};
+OFP_ASSERT(sizeof(struct ofp_bucket_counter) == 16);
+
+/* Body of reply to OFPMP_GROUP request. */
+struct ofp_group_stats {
+    uint16_t length;         /* Length of this entry. */
+    uint8_t pad[2];          /* Align to 64 bits. */
+    uint32_t group_id;       /* Group identifier. */
+    uint32_t ref_count;      /* Number of flows or groups that directly forward
+                                to this group. */
+    uint8_t pad2[4];         /* Align to 64 bits. */
+    uint64_t packet_count;   /* Number of packets processed by group. */
+    uint64_t byte_count;     /* Number of bytes processed by group. */
+    uint32_t duration_sec;   /* Time group has been alive in seconds. */
+    uint32_t duration_nsec;  /* Time group has been alive in nanoseconds beyond
+                                duration_sec. */
+    struct ofp_bucket_counter bucket_stats[0];
+};
+OFP_ASSERT(sizeof(struct ofp_group_stats) == 40);
+
+/* Body of reply to OFPMP_GROUP_DESC request. */
+struct ofp_group_desc_stats {
+    uint16_t length;              /* Length of this entry. */
+    uint8_t type;                 /* One of OFPGT_*. */
+    uint8_t pad;                  /* Pad to 64 bits. */
+    uint32_t group_id;            /* Group identifier. */
+    struct ofp_bucket buckets[0];
+};
+OFP_ASSERT(sizeof(struct ofp_group_desc_stats) == 8);
+
+/* Group configuration flags */
+enum ofp_group_capabilities {
+    OFPGFC_SELECT_WEIGHT   = 1 << 0,  /* Support weight for select groups */
+    OFPGFC_SELECT_LIVENESS = 1 << 1,  /* Support liveness for select groups */
+    OFPGFC_CHAINING        = 1 << 2,  /* Support chaining groups */
+    OFPGFC_CHAINING_CHECKS = 1 << 3,  /* Check chaining for loops and delete */
+};
+
+/* Body of reply to OFPMP_GROUP_FEATURES request. Group features. */
+struct ofp_group_features {
+    uint32_t  types;           /* Bitmap of OFPGT_* values supported. */
+    uint32_t  capabilities;    /* Bitmap of OFPGFC_* capability supported. */
+    uint32_t  max_groups[4];   /* Maximum number of groups for each type. */
+    uint32_t  actions[4];      /* Bitmaps of OFPAT_* that are supported. */
+};
+OFP_ASSERT(sizeof(struct ofp_group_features) == 40);
+
+/* Body of OFPMP_METER and OFPMP_METER_CONFIG requests. */
+struct ofp_meter_multipart_request {
+    uint32_t meter_id;       /* Meter instance, or OFPM_ALL. */
+    uint8_t pad[4];          /* Align to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_meter_multipart_request) == 8);
+
+/* Statistics for each meter band */
+struct ofp_meter_band_stats {
+    uint64_t        packet_band_count;   /* Number of packets in band. */
+    uint64_t        byte_band_count;     /* Number of bytes in band. */
+};
+OFP_ASSERT(sizeof(struct ofp_meter_band_stats) == 16);
+
+/* Body of reply to OFPMP_METER request. Meter statistics. */
+struct ofp_meter_stats {
+    uint32_t        meter_id;         /* Meter instance. */
+    uint16_t        len;              /* Length in bytes of this stats. */
+    uint8_t         pad[6];
+    uint32_t        flow_count;       /* Number of flows bound to meter. */
+    uint64_t        packet_in_count;  /* Number of packets in input. */
+    uint64_t        byte_in_count;    /* Number of bytes in input. */
+    uint32_t   duration_sec;  /* Time meter has been alive in seconds. */
+    uint32_t   duration_nsec; /* Time meter has been alive in nanoseconds beyond
+                                 duration_sec. */
+    struct ofp_meter_band_stats band_stats[0]; /* The band_stats length is
+                                         inferred from the length field. */
+};
+OFP_ASSERT(sizeof(struct ofp_meter_stats) == 40);
+
+/* Body of reply to OFPMP_METER_CONFIG request. Meter configuration. */
+struct ofp_meter_config {
+    uint16_t        length;           /* Length of this entry. */
+    uint16_t        flags;            /* All OFPMC_* that apply. */
+    uint32_t        meter_id;         /* Meter instance. */
+    struct ofp_meter_band_header bands[0]; /* The bands length is
+                                         inferred from the length field. */
+};
+OFP_ASSERT(sizeof(struct ofp_meter_config) == 8);
+
+/* Body of reply to OFPMP_METER_FEATURES request. Meter features. */
+struct ofp_meter_features {
+    uint32_t    max_meter;    /* Maximum number of meters. */
+    uint32_t    band_types;   /* Bitmaps of OFPMBT_* values supported. */
+    uint32_t    capabilities; /* Bitmaps of "ofp_meter_flags". */
+    uint8_t     max_bands;    /* Maximum bands per meter. */
+    uint8_t     max_color;    /* Maximum color value */
+    uint8_t     pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_meter_features) == 16);
+
+/* Body for ofp_multipart_request/reply of type OFPMP_EXPERIMENTER. */
+struct ofp_experimenter_multipart_header {
+    uint32_t experimenter;    /* Experimenter ID which takes the same form
+                                 as in struct ofp_experimenter_header. */
+    uint32_t exp_type;        /* Experimenter defined. */
+    /* Experimenter-defined arbitrary additional data. */
+};
+OFP_ASSERT(sizeof(struct ofp_experimenter_multipart_header) == 8);
+
+/* Experimenter extension. */
+struct ofp_experimenter_header {
+    struct ofp_header header;   /* Type OFPT_EXPERIMENTER. */
+    uint32_t experimenter;      /* Experimenter ID:
+                                 * - MSB 0: low-order bytes are IEEE OUI.
+                                 * - MSB != 0: defined by ONF. */
+    uint32_t exp_type;          /* Experimenter defined. */
+    /* Experimenter-defined arbitrary additional data. */
+};
+OFP_ASSERT(sizeof(struct ofp_experimenter_header) == 16);
+
+/* All ones is used to indicate all queues in a port (for stats retrieval). */
+#define OFPQ_ALL      0xffffffff
+
+/* Min rate > 1000 means not configured. */
+#define OFPQ_MIN_RATE_UNCFG      0xffff
+
+/* Max rate > 1000 means not configured. */
+#define OFPQ_MAX_RATE_UNCFG      0xffff
+
+enum ofp_queue_properties {
+    OFPQT_MIN_RATE      = 1,      /* Minimum datarate guaranteed. */
+    OFPQT_MAX_RATE      = 2,      /* Maximum datarate. */
+    OFPQT_EXPERIMENTER  = 0xffff  /* Experimenter defined property. */
+};
+
+/* Common description for a queue. */
+struct ofp_queue_prop_header {
+    uint16_t property;    /* One of OFPQT_. */
+    uint16_t len;         /* Length of property, including this header. */
+    uint8_t pad[4];       /* 64-bit alignment. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_header) == 8);
+
+/* Min-Rate queue property description. */
+struct ofp_queue_prop_min_rate {
+    struct ofp_queue_prop_header prop_header; /* prop: OFPQT_MIN_RATE, len: 16. */
+    uint16_t rate;        /* In 1/10 of a percent; >1000 -> disabled. */
+    uint8_t pad[6];       /* 64-bit alignment */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_min_rate) == 16);
+
+/* Max-Rate queue property description. */
+struct ofp_queue_prop_max_rate {
+    struct ofp_queue_prop_header prop_header; /* prop: OFPQT_MAX_RATE, len: 16. */
+    uint16_t rate;        /* In 1/10 of a percent; >1000 -> disabled. */
+    uint8_t pad[6];       /* 64-bit alignment */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_max_rate) == 16);
+
+/* Experimenter queue property description. */
+struct ofp_queue_prop_experimenter {
+    struct ofp_queue_prop_header prop_header; /* prop: OFPQT_EXPERIMENTER, len: 16. */
+    uint32_t experimenter;          /* Experimenter ID which takes the same
+                                       form as in struct
+                                       ofp_experimenter_header. */
+    uint8_t pad[4];       /* 64-bit alignment */
+    uint8_t data[0];      /* Experimenter defined data. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_experimenter) == 16);
+
+/* Full description for a queue. */
+struct ofp_packet_queue {
+    uint32_t queue_id;     /* id for the specific queue. */
+    uint32_t port;         /* Port this queue is attached to. */
+    uint16_t len;          /* Length in bytes of this queue desc. */
+    uint8_t pad[6];        /* 64-bit alignment. */
+    struct ofp_queue_prop_header properties[0]; /* List of properties. */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_queue) == 16);
+
+/* Query for port queue configuration. */
+struct ofp_queue_get_config_request {
+    struct ofp_header header;
+    uint32_t port;         /* Port to be queried. Should refer
+                              to a valid physical port (i.e. < OFPP_MAX),
+                              or OFPP_ANY to request all configured
+                              queues.*/
+    uint8_t pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_queue_get_config_request) == 16);
+
+/* Queue configuration for a given port. */
+struct ofp_queue_get_config_reply {
+    struct ofp_header header;
+    uint32_t port;
+    uint8_t pad[4];
+    struct ofp_packet_queue queues[0]; /* List of configured queues. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_get_config_reply) == 16);
+
+/* OFPAT_SET_QUEUE action struct: send packets to given queue on port. */
+struct ofp_action_set_queue {
+    uint16_t type;            /* OFPAT_SET_QUEUE. */
+    uint16_t len;             /* Len is 8. */
+    uint32_t queue_id;        /* Queue id for the packets. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_set_queue) == 8);
+
+struct ofp_queue_stats_request {
+    uint32_t port_no;        /* All ports if OFPP_ANY. */
+    uint32_t queue_id;       /* All queues if OFPQ_ALL. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_stats_request) == 8);
+
+struct ofp_queue_stats {
+    uint32_t port_no;
+    uint32_t queue_id;       /* Queue id. */
+    uint64_t tx_bytes;       /* Number of transmitted bytes. */
+    uint64_t tx_packets;     /* Number of transmitted packets. */
+    uint64_t tx_errors;      /* Number of packets dropped due to overrun. */
+    uint32_t duration_sec;   /* Time queue has been alive in seconds. */
+    uint32_t duration_nsec;  /* Time queue has been alive in nanoseconds beyond
+                                duration_sec. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_stats) == 40);
+
+/* Configures the "role" of the sending controller.  The default role is:
+ *
+ *    - Equal (NX_ROLE_EQUAL), which allows the controller access to all
+ *      OpenFlow features. All controllers have equal responsibility.
+ *
+ * The other possible roles are a related pair:
+ *
+ *    - Master (NX_ROLE_MASTER) is equivalent to Equal, except that there may
+ *      be at most one Master controller at a time: when a controller
+ *      configures itself as Master, any existing Master is demoted to the
+ *      Slave role.
+ *
+ *    - Slave (NX_ROLE_SLAVE) allows the controller read-only access to
+ *      OpenFlow features.  In particular attempts to modify the flow table
+ *      will be rejected with an OFPBRC_EPERM error.
+ *
+ *      Slave controllers do not receive OFPT_PACKET_IN or OFPT_FLOW_REMOVED
+ *      messages, but they do receive OFPT_PORT_STATUS messages.
+ */
+
+/* Controller roles. */
+enum ofp_controller_role {
+    OFPCR_ROLE_NOCHANGE = 0,    /* Don't change current role. */
+    OFPCR_ROLE_EQUAL    = 1,    /* Default role, full access. */
+    OFPCR_ROLE_MASTER   = 2,    /* Full access, at most one master. */
+    OFPCR_ROLE_SLAVE    = 3,    /* Read-only access. */
+};
+
+/* Role request and reply message. */
+struct ofp_role_request {
+    struct ofp_header header;   /* Type OFPT_ROLE_REQUEST/OFPT_ROLE_REPLY. */
+    uint32_t role;              /* One of OFPCR_ROLE_*. */
+    uint8_t pad[4];             /* Align to 64 bits. */
+    uint64_t generation_id;     /* Master Election Generation Id */
+};
+OFP_ASSERT(sizeof(struct ofp_role_request) == 24);
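+
+/* Illustrative sketch (not part of the original header text): a controller
+ * claiming the master role could fill the structure roughly as follows.
+ * hton64() stands in for whatever 64-bit byte-order helper the surrounding
+ * code provides; header version/length/xid handling is omitted:
+ *
+ *     struct ofp_role_request req;
+ *     req.header.type = OFPT_ROLE_REQUEST;
+ *     req.role = htonl(OFPCR_ROLE_MASTER);
+ *     memset(req.pad, 0, sizeof(req.pad));
+ *     req.generation_id = hton64(current_generation_id);
+ */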
+
+/* Asynchronous message configuration. */
+struct ofp_async_config {
+    struct ofp_header header;     /* OFPT_GET_ASYNC_REPLY or OFPT_SET_ASYNC. */
+    uint32_t packet_in_mask[2];   /* Bitmasks of OFPR_* values. */
+    uint32_t port_status_mask[2]; /* Bitmasks of OFPPR_* values. */
+    uint32_t flow_removed_mask[2];/* Bitmasks of OFPRR_* values. */
+};
+OFP_ASSERT(sizeof(struct ofp_async_config) == 32);
+
+#endif /* openflow/openflow.h */
diff --git a/generic_utils.py b/generic_utils.py
new file mode 100644
index 0000000..cebfb7f
--- /dev/null
+++ b/generic_utils.py
@@ -0,0 +1,75 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+@brief Generic utilities
+
+Intended to be imported into another namespace
+"""
+
+import sys
+import of_g
+
+
+################################################################
+#
+# Configuration related
+#
+################################################################
+
+def config_check(str, dictionary = of_g.code_gen_config):
+    """
+    Return config value if in dictionary; else return False.
+    @param str The lookup index
+    @param dictionary The dict to check; defaults to of_g.code_gen_config
+    """
+
+    if str in dictionary:
+        return dictionary[str]
+
+    return False
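+
+# Example usage (sketch; the key and the called helper are illustrative, not
+# names defined elsewhere in this code base):
+#
+#   if config_check("gen_example_feature"):
+#       generate_example_feature()
+#
+# Missing keys simply yield False, so callers can treat unset options as
+# disabled without extra checks.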
+
+################################################################
+#
+# Debug
+#
+################################################################
+
+def debug(obj):
+    """
+    Write debug output to both the debug output file and the log file
+    @param obj The object to stringify and write
+    """
+    of_g.loxigen_dbg_file.write(str(obj) + "\n")
+    log(obj)
+
+def log(obj):
+    """
+    Log output to the current global log file
+    @param obj The object to stringify and write
+    """
+    of_g.loxigen_log_file.write(str(obj) + "\n")
diff --git a/lang_c.py b/lang_c.py
new file mode 100644
index 0000000..4addf0a
--- /dev/null
+++ b/lang_c.py
@@ -0,0 +1,109 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+@brief C language specific LOXI generating configuration
+
+This language specific file defines a dictionary 'targets' that
+defines the generated files and the functions used to generate them.
+"""
+
+import os
+import c_gen.c_code_gen as c_code_gen
+import c_gen.c_test_gen as c_test_gen
+import c_gen.c_dump_gen as c_dump_gen
+import c_gen.c_show_gen as c_show_gen
+import c_gen.c_validator_gen as c_validator_gen
+import c_gen.util
+
+def static(out, name):
+    c_gen.util.render_template(out, os.path.basename(name))
+
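+# Each generator registered in 'targets' takes the same (out, name) signature
+# that static() above uses: 'out' is the output stream for the target file and
+# 'name' is the target path.  A hypothetical extra target (names made up)
+# could be registered as:
+#
+#   targets['loci/src/my_extension.c'] = my_module.gen_my_extension
+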
+targets = {
+    # LOCI headers
+    'loci/inc/loci/loci.h': c_code_gen.top_h_gen,
+    'loci/inc/loci/loci_idents.h': c_code_gen.identifiers_gen,
+    'loci/inc/loci/loci_base.h': c_code_gen.base_h_gen,
+    'loci/inc/loci/of_match.h': c_code_gen.match_h_gen,
+    'loci/inc/loci/loci_doc.h': c_code_gen.gen_accessor_doc,
+    'loci/inc/loci/loci_obj_dump.h': c_dump_gen.gen_obj_dump_h,
+    'loci/inc/loci/loci_obj_show.h': c_show_gen.gen_obj_show_h,
+    'loci/inc/loci/loci_validator.h': c_validator_gen.gen_h,
+
+    # Static LOCI headers
+    'loci/inc/loci/bsn_ext.h': static,
+    'loci/inc/loci/loci_dox.h': static,
+    'loci/inc/loci/loci_dump.h': static,
+    'loci/inc/loci/loci_show.h': static,
+    'loci/inc/loci/of_buffer.h': static,
+    'loci/inc/loci/of_doc.h': static,
+    'loci/inc/loci/of_message.h': static,
+    'loci/inc/loci/of_object.h': static,
+    'loci/inc/loci/of_utils.h': static,
+    'loci/inc/loci/of_wire_buf.h': static,
+
+    # LOCI code
+    'loci/src/loci.c': c_code_gen.top_c_gen,
+    'loci/src/of_type_data.c': c_code_gen.type_data_c_gen,
+    'loci/src/of_match.c': c_code_gen.match_c_gen,
+    'loci/src/loci_obj_dump.c': c_dump_gen.gen_obj_dump_c,
+    'loci/src/loci_obj_show.c': c_show_gen.gen_obj_show_c,
+    'loci/src/loci_validator.c': c_validator_gen.gen_c,
+
+    # Static LOCI code
+    'loci/src/loci_int.h': static,
+    'loci/src/loci_log.c': static,
+    'loci/src/loci_log.h': static,
+    'loci/src/of_object.c': static,
+    'loci/src/of_type_maps.c': static,
+    'loci/src/of_utils.c': static,
+    'loci/src/of_wire_buf.c': static,
+
+    # Static LOCI documentation
+    'loci/README': static,
+    'loci/Doxyfile': static,
+
+    # locitest code
+    'locitest/inc/locitest/of_dup.h': c_test_gen.dup_h_gen,
+    'locitest/inc/locitest/test_common.h': c_test_gen.gen_common_test_header,
+    'locitest/src/of_dup.c': c_test_gen.dup_c_gen,
+    'locitest/src/test_common.c': c_test_gen.gen_common_test,
+    'locitest/src/test_list.c': c_test_gen.gen_list_test,
+    'locitest/src/test_match.c': c_test_gen.gen_match_test,
+    'locitest/src/test_msg.c': c_test_gen.gen_msg_test,
+    'locitest/src/test_scalar_acc.c': c_test_gen.gen_message_scalar_test,
+    'locitest/src/test_uni_acc.c': c_test_gen.gen_unified_accessor_tests,
+
+    # Static locitest code
+    'locitest/inc/locitest/unittest.h': static,
+    'locitest/src/test_ext.c': static,
+    'locitest/src/test_list_limits.c': static,
+    'locitest/src/test_match_utils.c': static,
+    'locitest/src/test_setup_from_add.c': static,
+    'locitest/src/test_utils.c': static,
+    'locitest/src/test_validator.c': static,
+}
diff --git a/lang_python.py b/lang_python.py
new file mode 100644
index 0000000..7c4a0e3
--- /dev/null
+++ b/lang_python.py
@@ -0,0 +1,88 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+Python backend for LOXI
+
+This language specific file defines a dictionary 'targets' that
+defines the generated files and the functions used to generate them.
+
+For each generated file there is a generate_* function in py_gen.codegen
+and a Tenjin template under py_gen/templates.
+
+Target directory structure:
+    pyloxi:
+        loxi:
+            __init__.py
+            of10:
+                __init__.py
+                action.py       # Action classes
+                common.py       # Structs shared by multiple messages
+                const.py        # OpenFlow constants
+                message.py      # Message classes
+                util.py         # Utility functions
+            of12: ...
+            of13: ...
+
+The user will add the pyloxi directory to PYTHONPATH. Then they can
+"import loxi" or "import loxi.of10". The idiomatic import is
+"import loxi.of10 as ofp". The protocol modules (e.g. of10) import
+all of their submodules, so the user can access "ofp.message" without
+further imports. The protocol modules also import the constants from
+the const module directly into their namespace so the user can access
+"ofp.OFPP_NONE".
+"""
+
+import py_gen
+import py_gen.util
+import py_gen.codegen
+
+versions = {
+    1: "of10",
+}
+
+prefix = 'pyloxi/loxi'
+
+modules = ["action", "common", "const", "message", "util"]
+
+def make_gen(name, version):
+    fn = getattr(py_gen.codegen, "generate_" + name)
+    return lambda out, name: fn(out, name, version)
+
+def static(template_name):
+    return lambda out, name: py_gen.util.render_template(out, template_name)
+
+targets = {
+    prefix+'/__init__.py': static('toplevel_init.py'),
+    prefix+'/pp.py': static('pp.py'),
+}
+
+for version, subdir in versions.items():
+    targets['%s/%s/__init__.py' % (prefix, subdir)] = make_gen('init', version)
+    for module in modules:
+        filename = '%s/%s/%s.py' % (prefix, subdir, module)
+        targets[filename] = make_gen(module, version)
diff --git a/loxi_front_end/__init__.py b/loxi_front_end/__init__.py
new file mode 100644
index 0000000..5e4e379
--- /dev/null
+++ b/loxi_front_end/__init__.py
@@ -0,0 +1,26 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
diff --git a/loxi_front_end/c_parse_utils.py b/loxi_front_end/c_parse_utils.py
new file mode 100644
index 0000000..f06904b
--- /dev/null
+++ b/loxi_front_end/c_parse_utils.py
@@ -0,0 +1,166 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+##
+# @brief Utilities related to parsing C files
+#
+import re
+import sys
+import os
+import of_g
+
+def type_dec_to_count_base(m_type):
+    """
+    Resolve a type declaration like uint8_t[4] to a count (4) and base_type
+    (uint8_t)
+
+    @param m_type The string type declaration to process
+    """
+    count = 1
+    chk_ar = m_type.split('[')
+    if len(chk_ar) > 1:
+        count_str = chk_ar[1].split(']')[0]
+        if count_str in of_g.ofp_constants:
+            count = of_g.ofp_constants[count_str]
+        else:
+            count = int(count_str)
+        base_type = chk_ar[0]
+    else:
+        base_type = m_type
+    return count, base_type
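+
+# Examples (illustrative): type_dec_to_count_base("uint8_t[4]") returns
+# (4, "uint8_t"); a scalar declaration such as "uint32_t" returns
+# (1, "uint32_t"); a symbolic length like "char[OFP_MAX_TABLE_NAME_LEN]" is
+# resolved through of_g.ofp_constants when that table has been populated.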
+
+def comment_remover(text):
+    """
+    Remove C and C++ comments from text
+    @param text Possibly multiline string of C code.
+
+    http://stackoverflow.com/questions/241327/python-snippet-to-remove-c-and-c-comments
+    """
+
+    def replacer(match):
+        s = match.group(0)
+        if s.startswith('/'):
+            return ""
+        else:
+            return s
+    pattern = re.compile(
+        r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"',
+        re.DOTALL | re.MULTILINE
+    )
+    return re.sub(pattern, replacer, text)
+
+
+def clean_up_input(text):
+    text = comment_remover(text)
+    text_lines = text.splitlines()
+    all_lines = []
+    for line in text_lines:
+        line = re.sub("\t", " ", line) # get rid of tabs
+        line = re.sub(" +$", "", line) # Strip trailing blanks
+        if len(line):
+            all_lines.append(line)
+    text = "\n".join(all_lines)
+    return text
+
+def extract_structs(contents):
+    """
+    Extract the structures from raw C code input
+    @param contents The text of the original C code
+    """
+    contents = clean_up_input(contents)
+    struct_list = re.findall("struct .* \{[^}]+\};", contents)
+    return struct_list
+
+def extract_enums(contents):
+    """
+    Extract the enums from raw C code input
+    @param contents The text of the original C code
+    @return An array where each entry is an (unparsed) enum instance
+    """
+    contents = clean_up_input(contents)
+    enum_list = re.findall("enum .* \{[^}]+\};", contents)
+    return enum_list
+
+def extract_enum_vals(enum):
+    """
+    From a C enum, return a pair (name, values)
+    @param enum The C syntax enumeration
+    @returns (name, values), see below
+
+    name is the enum name
+    values is a list of pairs (<ident>, <value>) where ident is the
+    identifier and value is the associated value.
+
+    The values are integers when possible, otherwise strings
+    """
+
+    rv_list = []
+    name = re.search("enum +(\w+)", enum).group(1)
+    lines = " ".join(enum.split("\n"))
+    body = re.search("\{(.+)\}", lines).group(1)
+    entries = body.split(",")
+    previous_value = -1
+    for m in entries:
+        if re.match(" *$", m): # Empty line
+            continue
+        # Parse with = first
+        search_obj = re.match(" +(\w+) *= *(.*) *", m)
+        if search_obj:  # Okay, had =
+            e_name = search_obj.group(1)
+            e_value = search_obj.group(2)
+        else: # No equals
+            search_obj = re.match(" +(\w+)", m)
+            if not search_obj:
+                sys.stderr.write("\nError extracting enum for %s, member %s\n"
+                                 % (name, m))
+                sys.exit(1)
+            e_name = search_obj.group(1)
+            e_value = previous_value + 1
+        rv_list.append([e_name, e_value])
+
+        if type(e_value) is type(0):
+            previous_value = e_value
+        else:
+            try:
+                previous_value = int(e_value, 0)
+            except ValueError:
+                pass
+    return (name, rv_list)
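+
+# Example (illustrative): for the C text
+#   enum ofp_example { OFP_EXAMPLE_A = 1, OFP_EXAMPLE_B };
+# extract_enum_vals() returns
+#   ("ofp_example", [["OFP_EXAMPLE_A", "1"], ["OFP_EXAMPLE_B", 2]])
+# i.e. explicit values come back as the original strings while implicit
+# values are computed as integers from the previous entry.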
+
+def extract_defines(contents):
+    """
+    Returns a list of pairs (<identifier>, <value>) where
+    #define <ident> <value> appears in the file
+    """
+    rv_list = []
+    contents = clean_up_input(contents)
+    define_list = re.findall("\#define +[^ ]+ .*\n", contents, re.M)
+    for entry in define_list:
+        match_obj = re.match("#define +([^ ]+) +(.+)$", entry)
+        rv_list.append([match_obj.group(1),match_obj.group(2)])
+    return rv_list
+        
diff --git a/loxi_front_end/flags.py b/loxi_front_end/flags.py
new file mode 100644
index 0000000..3c401f9
--- /dev/null
+++ b/loxi_front_end/flags.py
@@ -0,0 +1,76 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+This file needs significant work and normalization.  We need a better
+representation for flags, associating them with variables, and associating
+them with OF versions.
+
+@fixme Most of this will be going away soon
+"""
+
+import sys
+import copy
+import type_maps
+import of_g
+import re
+
+# These mark idents as _not_ flags and have precedence
+non_flag_rules = [
+    "OF_CONFIG_FRAG_NORMAL",
+    "OF_FLOW_MOD_FAILED_BAD_FLAGS",
+    "OF_SWITCH_CONFIG_FAILED_BAD_FLAGS",
+    "OF_PORT_STATE_FLAG_STP_LISTEN",
+    "OF_TABLE_CONFIG_TABLE_MISS_CONTROLLER",
+    ]
+
+# These mark idents as flags
+flag_rules = [
+    "OF_CONFIG_",
+    "OF_TABLE_CONFIG_",
+    ]
+
+def ident_is_flag(ident):
+    """
+    Return True if ident should be treated as a flag
+    """
+
+    # Do negative matches first
+    for entry in non_flag_rules:
+        if re.match(entry, ident):
+            return False
+
+    # General rule, if it says flag it is (unless indicated above)
+    if ident.find("FLAG") >= 0:
+        return True
+
+    for entry in flag_rules:
+        if re.match(entry, ident):
+            return True
+
+    return False
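+
+# Examples (illustrative identifiers): ident_is_flag("OF_PORT_CONFIG_FLAG_NO_FWD")
+# is True because the name contains "FLAG"; ident_is_flag("OF_CONFIG_FRAG_NORMAL")
+# is False because the negative rules above take precedence over the
+# "OF_CONFIG_" prefix rule.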
+    
diff --git a/loxi_front_end/identifiers.py b/loxi_front_end/identifiers.py
new file mode 100644
index 0000000..a4122ee
--- /dev/null
+++ b/loxi_front_end/identifiers.py
@@ -0,0 +1,99 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+##
+# @brief Process identifiers for updating of_g.identifiers
+#
+
+import sys
+import of_h_utils
+from generic_utils import *
+import of_g
+
+##
+# The value to use when an identifier is not defined for a version
+UNDEFINED_IDENT_VALUE = 0
+
+def add_identifiers(all_idents, idents_by_group, version, contents):
+    """
+    Update all_idents with identifiers from an openflow.h header file
+    @param all_idents A dict, usually of_g.identifiers
+    @param idents_by_group A dict for mapping LOXI idents to groups,
+    usually of_g.identifiers_by_group
+    @param version The OF wire version
+    @param contents The contents of an openflow.h file
+    """
+
+    # Get the dictionary of enums from the file text
+    enum_dict = of_h_utils.get_enum_dict(version,
+                                         contents)
+    for name, info in enum_dict.items():
+
+        # info is a DotDict w/ keys ofp_name, ofp_group and value.
+        if name in all_idents:
+            all_idents[name]["values_by_version"][version] = info.value
+            if ((all_idents[name]["ofp_name"] != info.ofp_name or 
+                all_idents[name]["ofp_group"] != info.ofp_group) and
+                info.ofp_name.find("OFPP_") != 0):
+                log("""
+NOTE: Identifier %s has different ofp name or group in version %s
+From ofp name %s, group %s to name %s, group %s.
+This could indicate a name collision in LOXI identifier translation.
+""" % (name, str(version), all_idents[name]["ofp_name"],
+       all_idents[name]["ofp_group"], info.ofp_name, info.ofp_group))
+                # Update stuff assuming newer versions processed later
+                all_idents[name]["ofp_name"] = info.ofp_name
+                all_idents[name]["ofp_group"] = info.ofp_group
+
+        else: # New name
+            all_idents[name] = dict(
+                values_by_version = {version:info.value},
+                common_value = info["value"],
+                ofp_name = info["ofp_name"],
+                ofp_group = info["ofp_group"]
+                )
+            if info["ofp_group"] not in idents_by_group:
+                idents_by_group[info["ofp_group"]] = []
+            if name not in idents_by_group[info["ofp_group"]]:
+                idents_by_group[info["ofp_group"]].append(name)
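+
+# Sketch of the resulting all_idents entry shape (the identifier, group and
+# values shown here are illustrative, not taken from a real header):
+#
+#   all_idents["OF_EXAMPLE_IDENT"] = {
+#       "values_by_version": {1: "0x10", 3: "0x20"},
+#       "common_value": "0x10",
+#       "ofp_name": "OFP_EXAMPLE_IDENT",
+#       "ofp_group": "ofp_example_group",
+#   }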
+
+def all_versions_agree(all_idents, version_list, name):
+    val_list = all_idents[name]["values_by_version"]
+    for version in version_list:
+        if not version in val_list:
+            return False
+        if str(val_list[version]) != str(all_idents[name]["common_value"]):
+            return False
+    return True
+
+def defined_versions_agree(all_idents, version_list, name):
+    val_list = all_idents[name]["values_by_version"]
+    for version in version_list:
+        if version in val_list:
+            if str(val_list[version]) != str(all_idents[name]["common_value"]):
+                return False
+    return True
diff --git a/loxi_front_end/match.py b/loxi_front_end/match.py
new file mode 100644
index 0000000..4ab3126
--- /dev/null
+++ b/loxi_front_end/match.py
@@ -0,0 +1,488 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+# @brief Match data representation
+#
+# @fixme This still has lots of C specific code that should be moved into c_gen
+
+import sys
+import of_g
+from generic_utils import *
+import oxm
+import loxi_utils.loxi_utils as loxi_utils
+
+#
+# Use 1.2 match semantics for common case
+#
+# Generate maps between generic match and version specific matches
+# Generate dump functions for generic match
+# Generate dump functions for version specific matches
+
+## @var of_match_members
+# The dictionary from unified match members to type and indexing info
+#
+# Keys:
+#   name The unified name used for the member
+#   m_type The data type used for the object in unified structure
+#   print_type The id to use when printing
+#   conditions The condition under which the field can occur (TBD)
+#   takes_mask_in_spec Shown as taking mask in OF 1.2 spec; IGNORED NOW
+#   order Used to define an order for readability
+#   v1_wc_shift The WC shift in OF 1.0
+#   v2_wc_shift The WC shift in OF 1.1
+#
+# Unless noted otherwise, class is 0x8000, OFPXMC_OPENFLOW_BASIC
+# We use the 1.2 names and alias older names
+# Conditions:
+#  is_ipv4(_m):  ((_m)->eth_type == 0x0800)
+#  is_ipv6(_m):  ((_m)->eth_type == 0x86dd)
+#  is_ip(_m):    (is_ipv4(_m) || is_ipv6(_m))
+#  is_arp(_m):   ((_m)->eth_type == 0x0806)
+#  is_tcp(_m):   (is_ip(_m) && ((_m)->ip_proto == 6))
+#  is_udp(_m):   (is_ip(_m) && ((_m)->ip_proto == 17))
+#  is_sctp(_m):  (is_ip(_m) && ((_m)->ip_proto == 132))
+#  is_icmpv4(_m):  (is_ipv4(_m) && ((_m)->ip_proto == 1))
+#  is_icmpv6(_m):  (is_ipv6(_m) && ((_m)->ip_proto == 58))
+#
+
+of_match_members = dict(
+    in_port = dict(
+        name="in_port",
+        m_type="of_port_no_t",
+        print_type="PRIx32",
+        conditions="",
+        v1_wc_shift=0,
+        v2_wc_shift=0,
+        takes_mask_in_spec=False,
+        order=100,
+        ),
+    in_phy_port = dict(
+        name="in_phy_port",
+        m_type="of_port_no_t",
+        print_type="PRIx32",
+        conditions="", # OXM_OF_IN_PORT must be present
+        takes_mask_in_spec=False,
+        order=101,
+        ),
+    metadata = dict(
+        name="metadata",
+        m_type="uint64_t",
+        print_type="PRIx64",
+        conditions="",
+        takes_mask_in_spec=True,
+        order=102,
+        ),
+
+    eth_dst = dict(
+        name="eth_dst",
+        m_type="of_mac_addr_t",
+        v1_wc_shift=3,
+        print_type="\"p\"",
+        conditions="",
+        takes_mask_in_spec=True,
+        order=200,
+        ),
+    eth_src = dict(
+        name="eth_src",
+        m_type="of_mac_addr_t",
+        v1_wc_shift=2,
+        print_type="\"p\"",
+        conditions="",
+        takes_mask_in_spec=True,
+        order=201,
+        ),
+    eth_type = dict(
+        name="eth_type",
+        m_type="uint16_t",
+        v1_wc_shift=4,
+        v2_wc_shift=3,
+        print_type="PRIx16",
+        conditions="",
+        takes_mask_in_spec=False,
+        order=203,
+        ),
+    vlan_vid = dict(  # FIXME: Semantics changed in 1.2
+        # Use CFI bit to indicate tag presence
+        name="vlan_vid",
+        m_type="uint16_t",
+        v1_wc_shift=1,
+        v2_wc_shift=1,
+        print_type="PRIx16",
+        conditions="",
+        takes_mask_in_spec=True,
+        order=210,
+        ),
+    vlan_pcp = dict(
+        name="vlan_pcp",
+        m_type="uint8_t",
+        v1_wc_shift=20,
+        v2_wc_shift=2,
+        print_type="PRIx8",
+        conditions="",
+        takes_mask_in_spec=False,
+        order=211,
+        ),
+    ipv4_src = dict(
+        name="ipv4_src",
+        m_type="uint32_t",
+        v1_wc_shift=8,
+        print_type="PRIx32",
+        conditions="is_ipv4(match)",
+        takes_mask_in_spec=True,
+        order=300,
+        ),
+    ipv4_dst = dict(
+        name="ipv4_dst",
+        m_type="uint32_t",
+        v1_wc_shift=14,
+        print_type="PRIx32",
+        conditions="is_ipv4(match)",
+        takes_mask_in_spec=True,
+        order=301,
+        ),
+    ip_dscp = dict(
+        name="ip_dscp",
+        m_type="uint8_t",
+        v1_wc_shift=21,
+        v2_wc_shift=4,
+        print_type="PRIx8",
+        conditions="is_ip(match)",
+        takes_mask_in_spec=False,
+        order=310,
+        ),
+    ip_ecn = dict(
+        name="ip_ecn",
+        m_type="uint8_t",
+        print_type="PRIx8",
+        conditions="is_ip(match)",
+        takes_mask_in_spec=False,
+        order=311,
+        ),
+    ip_proto = dict(
+        name="ip_proto",
+        m_type="uint8_t",
+        v1_wc_shift=5,
+        v2_wc_shift=5,
+        print_type="PRIx8",
+        conditions="is_ip(match)",
+        takes_mask_in_spec=False,
+        order=320,
+        ),
+
+    tcp_dst = dict(
+        name="tcp_dst",
+        m_type="uint16_t",
+        v1_wc_shift=7,
+        v2_wc_shift=7,
+        print_type="PRIx16",
+        conditions="is_tcp(match)",
+        takes_mask_in_spec=False,
+        order=400,
+        ),
+    tcp_src = dict(
+        name="tcp_src",
+        m_type="uint16_t",
+        v1_wc_shift=6,
+        v2_wc_shift=6,
+        print_type="PRIx16",
+        conditions="is_tcp(match)",
+        takes_mask_in_spec=False,
+        order=401,
+        ),
+
+    udp_dst = dict(
+        name="udp_dst",
+        m_type="uint16_t",
+        print_type="PRIx16",
+        conditions="is_udp(match)",
+        takes_mask_in_spec=False,
+        order=410,
+        ),
+    udp_src = dict(
+        name="udp_src",
+        m_type="uint16_t",
+        print_type="PRIx16",
+        conditions="is_udp(match)",
+        takes_mask_in_spec=False,
+        order=411,
+        ),
+
+    sctp_dst = dict(
+        name="sctp_dst",
+        m_type="uint16_t",
+        print_type="PRIx16",
+        conditions="is_sctp(match)",
+        takes_mask_in_spec=False,
+        order=420,
+        ),
+    sctp_src = dict(
+        name="sctp_src",
+        m_type="uint16_t",
+        print_type="PRIx16",
+        conditions="is_sctp(match)",
+        takes_mask_in_spec=False,
+        order=421,
+        ),
+
+    icmpv4_type = dict(
+        name="icmpv4_type",
+        m_type="uint8_t",
+        print_type="PRIx8",
+        conditions="is_icmp_v4(match)",
+        takes_mask_in_spec=False,
+        order=430,
+        ),
+    icmpv4_code = dict(
+        name="icmpv4_code",
+        m_type="uint8_t",
+        print_type="PRIx8",
+        conditions="is_icmp_v4(match)",
+        takes_mask_in_spec=False,
+        order=431,
+        ),
+
+    arp_op = dict(
+        name="arp_op",
+        m_type="uint16_t",
+        print_type="PRIx16",
+        conditions="is_arp(match)",
+        takes_mask_in_spec=False,
+        order=250,
+        ),
+
+    arp_spa = dict(
+        name="arp_spa",
+        m_type="uint32_t",
+        print_type="PRIx32",
+        conditions="is_arp(match)",
+        takes_mask_in_spec=True,
+        order=251,
+        ),
+    arp_tpa = dict(
+        name="arp_tpa",
+        m_type="uint32_t",
+        print_type="PRIx32",
+        conditions="is_arp(match)",
+        takes_mask_in_spec=True,
+        order=252,
+        ),
+
+    arp_sha = dict(
+        name="arp_sha",
+        m_type="of_mac_addr_t",
+        print_type="\"p\"",
+        conditions="is_arp(match)",
+        takes_mask_in_spec=False,
+        order=253,
+        ),
+    arp_tha = dict(
+        name="arp_tha",
+        m_type="of_mac_addr_t",
+        print_type="\"p\"",
+        conditions="is_arp(match)",
+        takes_mask_in_spec=False,
+        order=254,
+        ),
+
+    ipv6_src = dict(
+        name="ipv6_src",
+        m_type="of_ipv6_t",
+        print_type="\"p\"",
+        conditions="is_ipv6(match)",
+        takes_mask_in_spec=True,
+        order=350,
+        ),
+    ipv6_dst = dict(
+        name="ipv6_dst",
+        m_type="of_ipv6_t",
+        print_type="\"p\"",
+        conditions="is_ipv6(match)",
+        takes_mask_in_spec=True,
+        order=351,
+        ),
+
+    ipv6_flabel = dict(
+        name="ipv6_flabel",
+        m_type="uint32_t",
+        print_type="PRIx32",
+        conditions="is_ipv6(match)",
+        takes_mask_in_spec=False, # Comment in openflow.h says True
+        order=360,
+        ),
+
+    icmpv6_type = dict(
+        name="icmpv6_type",
+        m_type="uint8_t",
+        print_type="PRIx8",
+        conditions="is_icmp_v6(match)",
+        takes_mask_in_spec=False,
+        order=440,
+        ),
+    icmpv6_code = dict(
+        name="icmpv6_code",
+        m_type="uint8_t",
+        print_type="PRIx8",
+        conditions="is_icmp_v6(match)",
+        takes_mask_in_spec=False,
+        order=441,
+        ),
+
+    ipv6_nd_target = dict(
+        name="ipv6_nd_target",
+        m_type="of_ipv6_t",
+        print_type="\"p\"",
+        conditions="", # fixme
+        takes_mask_in_spec=False,
+        order=442,
+        ),
+
+    ipv6_nd_sll = dict(
+        name="ipv6_nd_sll",
+        m_type="of_mac_addr_t",
+        print_type="\"p\"",
+        conditions="", # fixme
+        takes_mask_in_spec=False,
+        order=443,
+        ),
+    ipv6_nd_tll = dict(
+        name="ipv6_nd_tll",
+        m_type="of_mac_addr_t",
+        print_type="\"p\"",
+        conditions="", # fixme
+        takes_mask_in_spec=False,
+        order=444,
+        ),
+
+    mpls_label = dict(
+        name="mpls_label",
+        m_type="uint32_t",
+        v2_wc_shift=8,
+        print_type="PRIx32",
+        conditions="",
+        takes_mask_in_spec=False,
+        order=500,
+        ),
+    mpls_tc = dict(
+        name="mpls_tc",
+        m_type="uint8_t",
+        v2_wc_shift=9,
+        print_type="PRIx8",
+        conditions="",
+        takes_mask_in_spec=False,
+        order=501,
+        ),
+)
+
+match_keys_sorted = of_match_members.keys()
+match_keys_sorted.sort(key=lambda entry:of_match_members[entry]["order"])
+
+of_v1_keys = [
+    "eth_dst",
+    "eth_src",
+    "eth_type",
+    "in_port",
+    "ipv4_dst",
+    "ip_proto",
+    "ipv4_src",
+    "ip_dscp",
+    "tcp_dst",  # Means UDP too for 1.0 and 1.1
+    "tcp_src",  # Means UDP too for 1.0 and 1.1
+    "vlan_pcp",
+    "vlan_vid"
+    ]
+
+of_v2_keys = [
+    "eth_dst",
+    "eth_src",
+    "eth_type",
+    "in_port",
+    "ipv4_dst",
+    "ip_proto",
+    "ipv4_src",
+    "ip_dscp",
+    "tcp_dst",  # Means UDP too for 1.0 and 1.1
+    "tcp_src",  # Means UDP too for 1.0 and 1.1
+    "vlan_pcp",
+    "vlan_vid",
+    "mpls_label",
+    "mpls_tc",
+    "metadata"
+    ]
+
+of_v2_full_mask = [
+    "eth_dst",
+    "eth_src",
+    "ipv4_dst",
+    "ipv4_src",
+    "metadata"
+    ]
+
+def oxm_index(key):
+    """
+    What's the index called for a match key
+    """
+    return "OF_OXM_INDEX_" + key.upper()
+
+##
+# Check that all members in the hash are recognized as match keys
+def match_sanity_check():
+    count = 0
+    for match_v in ["of_match_v1", "of_match_v2"]:
+        count += 1
+        for mm in of_g.unified[match_v][count]["members"]:
+            key = mm["name"]
+            if key.find("_mask") >= 0:
+                continue
+            if loxi_utils.skip_member_name(key):
+                continue
+            if key == "wildcards":
+                continue
+            if not key in of_match_members:
+                print "Key %s not found in match struct, v %s" % (key, match_v)
+                sys.exit(1)
+
+    # Check oxm list and the list above
+    for key in oxm.oxm_types:
+        if not key in of_match_members:
+            if not (key.find("_masked") > 0):
+                debug("Key %s in oxm.oxm_types, not of_match_members" % key)
+                sys.exit(1)
+            if not key[:-7] in of_match_members:
+                debug("Key %s in oxm.oxm_types, but %s not in of_match_members"
+                      % (key, key[:-7]))
+                sys.exit(1)
+
+    for key in of_match_members:
+        if not key in oxm.oxm_types:
+            debug("Key %s in of_match_members, not in oxm.oxm_types" % key)
+            sys.exit(1)
+        if of_match_members[key]["m_type"] != oxm.oxm_types[key]:
+            debug("Type mismatch for key %s in oxm data: %s vs %s" %
+                  (key, of_match_members[key]["m_type"], oxm.oxm_types[key]))
+            sys.exit(1)
+
+
diff --git a/loxi_front_end/of_h_utils.py b/loxi_front_end/of_h_utils.py
new file mode 100644
index 0000000..1259a90
--- /dev/null
+++ b/loxi_front_end/of_h_utils.py
@@ -0,0 +1,154 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+##
+# @brief Utilities related to processing OF header files
+#
+
+import re
+import sys
+import os
+import c_parse_utils
+import loxi_utils.py_utils as py_utils
+import translation
+from generic_utils import *
+import copy
+
+# Silently ignore idents matching any of these
+ignore_idents = [
+    "OFPXMT_OFB_", "OFPFMF_", "OXM_OF_", "OFPXMT_",
+    "OPENFLOW_OPENFLOW_H", "OFP_ASSERT", "OFP_PACKED", "OFP_VERSION"
+    ]
+
+def warn_unmapped_ident(ident, t_name="enum"):
+    for ignore_ident in ignore_idents:
+        if re.match(ignore_ident, ident):
+            return
+    log("Skipping ident %s. Did not map %s identifier to LOXI" %
+              (ident, t_name))
+
+def fixup_values(ident, value, version, ident_list):
+    """
+    Fix up values for LOXI reasons
+
+    Translate definitions that refer to other identifiers.
+    This is really just needed for the special case of OFPFW in 1.0 and 1.1.
+
+    Also, LOXI is aware of the change in type of port numbers, so 
+    translate those here.
+    """
+    value_string = str(value).strip()
+    if ident.find("OFPP_") == 0 and version == of_g.VERSION_1_0:
+        value_string = "0x%x" % (int(value, 0) + 0xffff0000)
+
+    # Otherwise, if no reference to a wildcard value, all done
+    if value_string.find("OFPFW") < 0:
+        return value_string
+    for ref_ident, id_value in ident_list.items():
+        id_value_string = "(" + str(id_value).strip() + ")"
+        # If the identifier has (, etc., ignore it; not handling params
+        if ref_ident.find("(") >= 0:
+            continue
+        value_string = re.sub(ref_ident, id_value_string, value_string)
+    return value_string
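+
+# Worked example (illustrative): in the 1.0 header OFPP_CONTROLLER is 0xfffd,
+# so for version 1.0 fixup_values() rewrites it to
+# "0x%x" % (0xfffd + 0xffff0000) == "0xfffffffd", matching the 32-bit port
+# number encoding used by later versions.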
+
+def get_enum_dict(version, contents):
+    """
+    Given openflow.h input, create a dict for its enums
+    @param contents The raw contents of the C file
+
+    The dict returned is indexed by LOXI identifier.  Each entry is a
+    DotDict with three keys: value, ofp_name and ofp_group.  The value is an
+    int value when possible, otherwise a string.  The ofp_name is the original
+    name from the openflow header file.  The ofp_group is the enum type.
+    """
+    rv_list = {}
+
+    version_ref = of_g.short_version_names[version]
+    enum_list = c_parse_utils.extract_enums(contents)
+    defines_list = c_parse_utils.extract_defines(contents)
+
+    # First generate a list of all original idents and values for translation
+    full_ident_list = {}
+
+    for enum in enum_list:
+        (name, values) = c_parse_utils.extract_enum_vals(enum)
+        for (ident, value) in values:
+            full_ident_list[ident] = str(value).strip()
+    for ident, value in defines_list:
+        full_ident_list[ident] = str(value).strip()
+
+    # Process enum idents
+    for enum in enum_list:
+        (name, values) = c_parse_utils.extract_enum_vals(enum)
+        for (ident, value) in values:
+            loxi_name = translation.loxi_name(ident)
+            if not loxi_name:
+                warn_unmapped_ident(ident)
+                continue
+            if loxi_name in rv_list:
+                sys.stderr.write("\nError: %s in ident list already\n" % 
+                                 loxi_name)
+                sys.exit(1)
+
+            value_str = fixup_values(ident, value, version, full_ident_list)
+            log("Adding LOXI identifier %s from name %s" % (loxi_name, ident))
+            rv_list[loxi_name] = py_utils.DotDict(dict(
+                ofp_name = ident,
+                ofp_group = name,
+                value = value_str))
+
+    for ident, value in defines_list:
+        loxi_name = translation.loxi_name(ident)
+        if not loxi_name:
+            warn_unmapped_ident(ident, "macro defn")
+            continue
+
+        value_str = fixup_values(ident, value, version, full_ident_list)
+        if loxi_name in rv_list:
+            if value_str != rv_list[loxi_name].value:
+                sys.stderr.write("""
+ERROR: IDENT COLLISION.  Version %s, LOXI Ident %s.
+New ofp_name %s, value %s.
+Previous ofp_name %s, value %s.
+""" % (version_ref, loxi_name, ident, value_str,
+       rv_list[loxi_name].ofp_name, rv_list[loxi_name].value))
+                sys.exit(1)
+            else:
+                log("Ignoring redundant entry %s, mapping to %s" %
+                          (ident, loxi_name))
+
+        rv_list[loxi_name] = py_utils.DotDict(dict(
+                ofp_name = ident,
+                ofp_group = "macro_definitions",
+                value = value_str))
+
+    return rv_list
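+
+# For illustration, the returned dict has this shape (exact contents depend on
+# the openflow.h input that was parsed):
+#
+#   {
+#       "OF_OBJ_TYPE_HELLO": DotDict(ofp_name="OFPT_HELLO",
+#                                    ofp_group="ofp_type", value="0"),
+#       "OF_MAX_TABLE_NAME_LEN": DotDict(ofp_name="OFP_MAX_TABLE_NAME_LEN",
+#                                        ofp_group="macro_definitions",
+#                                        value="32"),
+#       ...
+#   }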
+
+
+
+
diff --git a/loxi_front_end/oxm.py b/loxi_front_end/oxm.py
new file mode 100644
index 0000000..d4cf273
--- /dev/null
+++ b/loxi_front_end/oxm.py
@@ -0,0 +1,228 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+import of_g
+
+oxm_types = dict(
+    in_port               = "of_port_no_t",
+    in_port_masked        = "of_port_no_t",
+    in_phy_port           = "of_port_no_t",
+    in_phy_port_masked    = "of_port_no_t",
+    metadata              = "uint64_t",
+    metadata_masked       = "uint64_t",
+    eth_dst               = "of_mac_addr_t",
+    eth_dst_masked        = "of_mac_addr_t",
+    eth_src               = "of_mac_addr_t",
+    eth_src_masked        = "of_mac_addr_t",
+    eth_type              = "uint16_t",
+    eth_type_masked       = "uint16_t",
+    vlan_vid              = "uint16_t",
+    vlan_vid_masked       = "uint16_t",
+    vlan_pcp              = "uint8_t",
+    vlan_pcp_masked       = "uint8_t",
+    ip_dscp               = "uint8_t",
+    ip_dscp_masked        = "uint8_t",
+    ip_ecn                = "uint8_t",
+    ip_ecn_masked         = "uint8_t",
+    ip_proto              = "uint8_t",
+    ip_proto_masked       = "uint8_t",
+    ipv4_src              = "uint32_t",
+    ipv4_src_masked       = "uint32_t",
+    ipv4_dst              = "uint32_t",
+    ipv4_dst_masked       = "uint32_t",
+    tcp_src               = "uint16_t",
+    tcp_src_masked        = "uint16_t",
+    tcp_dst               = "uint16_t",
+    tcp_dst_masked        = "uint16_t",
+    udp_src               = "uint16_t",
+    udp_src_masked        = "uint16_t",
+    udp_dst               = "uint16_t",
+    udp_dst_masked        = "uint16_t",
+    sctp_src              = "uint16_t",
+    sctp_src_masked       = "uint16_t",
+    sctp_dst              = "uint16_t",
+    sctp_dst_masked       = "uint16_t",
+    icmpv4_type           = "uint8_t",
+    icmpv4_type_masked    = "uint8_t",
+    icmpv4_code           = "uint8_t",
+    icmpv4_code_masked    = "uint8_t",
+    arp_op                = "uint16_t",
+    arp_op_masked         = "uint16_t",
+    arp_spa               = "uint32_t",
+    arp_spa_masked        = "uint32_t",
+    arp_tpa               = "uint32_t",
+    arp_tpa_masked        = "uint32_t",
+    arp_sha               = "of_mac_addr_t",
+    arp_sha_masked        = "of_mac_addr_t",
+    arp_tha               = "of_mac_addr_t",
+    arp_tha_masked        = "of_mac_addr_t",
+    ipv6_src              = "of_ipv6_t",
+    ipv6_src_masked       = "of_ipv6_t",
+    ipv6_dst              = "of_ipv6_t",
+    ipv6_dst_masked       = "of_ipv6_t",
+    ipv6_flabel           = "uint32_t",
+    ipv6_flabel_masked    = "uint32_t",
+    icmpv6_type           = "uint8_t",
+    icmpv6_type_masked    = "uint8_t",
+    icmpv6_code           = "uint8_t",
+    icmpv6_code_masked    = "uint8_t",
+    ipv6_nd_target        = "of_ipv6_t",
+    ipv6_nd_target_masked = "of_ipv6_t",
+    ipv6_nd_sll           = "of_mac_addr_t",
+    ipv6_nd_sll_masked    = "of_mac_addr_t",
+    ipv6_nd_tll           = "of_mac_addr_t",
+    ipv6_nd_tll_masked    = "of_mac_addr_t",
+    mpls_label            = "uint32_t",
+    mpls_label_masked     = "uint32_t",
+    mpls_tc               = "uint8_t",
+    mpls_tc_masked        = "uint8_t"
+    # FIXME Add 1.3 oxm elts
+    )
+
+oxm_wire_type = dict(
+    in_port               = (0 << 1),
+    in_port_masked        = (0 << 1) + 1,
+    in_phy_port           = (1 << 1),
+    in_phy_port_masked    = (1 << 1) + 1,
+    metadata              = (2 << 1),
+    metadata_masked       = (2 << 1) + 1,
+    eth_dst               = (3 << 1),
+    eth_dst_masked        = (3 << 1) + 1,
+    eth_src               = (4 << 1),
+    eth_src_masked        = (4 << 1) + 1,
+    eth_type              = (5 << 1),
+    eth_type_masked       = (5 << 1) + 1,
+    vlan_vid              = (6 << 1),
+    vlan_vid_masked       = (6 << 1) + 1,
+    vlan_pcp              = (7 << 1),
+    vlan_pcp_masked       = (7 << 1) + 1,
+    ip_dscp               = (8 << 1),
+    ip_dscp_masked        = (8 << 1) + 1,
+    ip_ecn                = (9 << 1),
+    ip_ecn_masked         = (9 << 1) + 1,
+    ip_proto              = (10 << 1),
+    ip_proto_masked       = (10 << 1) + 1,
+    ipv4_src              = (11 << 1),
+    ipv4_src_masked       = (11 << 1) + 1,
+    ipv4_dst              = (12 << 1),
+    ipv4_dst_masked       = (12 << 1) + 1,
+    tcp_src               = (13 << 1),
+    tcp_src_masked        = (13 << 1) + 1,
+    tcp_dst               = (14 << 1),
+    tcp_dst_masked        = (14 << 1) + 1,
+    udp_src               = (15 << 1),
+    udp_src_masked        = (15 << 1) + 1,
+    udp_dst               = (16 << 1),
+    udp_dst_masked        = (16 << 1) + 1,
+    sctp_src              = (17 << 1),
+    sctp_src_masked       = (17 << 1) + 1,
+    sctp_dst              = (18 << 1),
+    sctp_dst_masked       = (18 << 1) + 1,
+    icmpv4_type           = (19 << 1),
+    icmpv4_type_masked    = (19 << 1) + 1,
+    icmpv4_code           = (20 << 1),
+    icmpv4_code_masked    = (20 << 1) + 1,
+    arp_op                = (21 << 1),
+    arp_op_masked         = (21 << 1) + 1,
+    arp_spa               = (22 << 1),
+    arp_spa_masked        = (22 << 1) + 1,
+    arp_tpa               = (23 << 1),
+    arp_tpa_masked        = (23 << 1) + 1,
+    arp_sha               = (24 << 1),
+    arp_sha_masked        = (24 << 1) + 1,
+    arp_tha               = (25 << 1),
+    arp_tha_masked        = (25 << 1) + 1,
+    ipv6_src              = (26 << 1),
+    ipv6_src_masked       = (26 << 1) + 1,
+    ipv6_dst              = (27 << 1),
+    ipv6_dst_masked       = (27 << 1) + 1,
+    ipv6_flabel           = (28 << 1),
+    ipv6_flabel_masked    = (28 << 1) + 1,
+    icmpv6_type           = (29 << 1),
+    icmpv6_type_masked    = (29 << 1) + 1,
+    icmpv6_code           = (30 << 1),
+    icmpv6_code_masked    = (30 << 1) + 1,
+    ipv6_nd_target        = (31 << 1),
+    ipv6_nd_target_masked = (31 << 1) + 1,
+    ipv6_nd_sll           = (32 << 1),
+    ipv6_nd_sll_masked    = (32 << 1) + 1,
+    ipv6_nd_tll           = (33 << 1),
+    ipv6_nd_tll_masked    = (33 << 1) + 1,
+    mpls_label            = (34 << 1),
+    mpls_label_masked     = (34 << 1) + 1,
+    mpls_tc               = (35 << 1),
+    mpls_tc_masked        = (35 << 1) + 1
+    # FIXME Add 1.3 oxm elts
+)
+
+def add_oxm_classes_1_2(classes, version):
+    """
+    Add the OXM classes to object passed.  This is a dictionary
+    indexed by class name whose value is an array of member objects.
+    """
+    # First the parent class:
+    if version not in [of_g.VERSION_1_2, of_g.VERSION_1_3]:
+        return
+
+    members = []
+    classes["of_oxm"] = []
+    of_g.ordered_classes[version].append("of_oxm")
+    members.append(dict(name="type_len", m_type="uint32_t"))
+    classes["of_oxm_header"] = members
+    of_g.ordered_classes[version].append("of_oxm_header")
+
+    for oxm in oxm_types:
+        members = []
+        # Assert oxm_types[oxm] in of_base_types
+        m_type = oxm_types[oxm]
+        if m_type in of_g.of_mixed_types:
+            m_type = of_g.of_mixed_types[m_type][version]
+        # m_name = "value_" + of_g.of_base_types[m_type]["short_name"]
+        members.append(dict(name="type_len", m_type="uint32_t"))
+        # members.append(dict(name=m_name, m_type=oxm_types[oxm]))
+        members.append(dict(name="value", m_type=oxm_types[oxm]))
+        if oxm.find("_masked") > 0:
+            members.append(dict(name="value_mask", m_type=oxm_types[oxm]))
+            
+        name = "of_oxm_" + oxm
+        of_g.ordered_classes[version].append(name)
+        classes[name] = members
+        
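+# A minimal sketch of how the wire values above form the 32-bit OXM type_len
+# field.  The helper and the 0x8000 class constant below are illustrative only
+# (not part of LoxiGen); oxm_wire_type already stores (oxm_field << 1) | has_mask,
+# so the full header is class(16 bits) | field+mask(8 bits) | payload length(8 bits):
+#
+#   def _oxm_type_len(name, payload_len):
+#       OFPXMC_OPENFLOW_BASIC = 0x8000
+#       return (OFPXMC_OPENFLOW_BASIC << 16) | (oxm_wire_type[name] << 8) | payload_len
+#
+#   _oxm_type_len("in_port", 4)          # -> 0x80000004
+#   _oxm_type_len("eth_dst_masked", 12)  # -> 0x8000070c
+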
+# /* Header for OXM experimenter match fields. */
+# struct ofp_oxm_experimenter_header {
+#     uint32_t oxm_header;        /* oxm_class = OFPXMC_EXPERIMENTER */
+#     uint32_t experimenter;      /* Experimenter ID which takes the same
+#                                    form as in struct ofp_experimenter_header. */
+# };
+
+
+# enum ofp_vlan_id {
+#     OFPVID_PRESENT = 0x1000, 
+#     OFPVID_NONE    = 0x0000, 
+# };
+
+# #define OFP_VLAN_NONE      OFPVID_NONE
diff --git a/loxi_front_end/parser.py b/loxi_front_end/parser.py
new file mode 100644
index 0000000..6b0e1eb
--- /dev/null
+++ b/loxi_front_end/parser.py
@@ -0,0 +1,62 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+import pyparsing as P
+
+kw = P.Keyword
+s = P.Suppress
+lit = P.Literal
+
+# Useful for marking the type of a parse result (matches the empty string, but
+# shows up in the result)
+tag = lambda name: P.Empty().setParseAction(P.replaceWith(name))
+
+word = P.Word(P.alphanums + '_')
+
+identifier = word.copy().setName("identifier")
+
+# Type names
+scalar_type = word
+array_type = P.Combine(word + lit('[') - P.Word(P.alphanums + '_') - lit(']'))
+list_type = P.Combine(kw('list') - lit('(') - identifier - lit(')'))
+any_type = (array_type | list_type | scalar_type).setName("type name")
+
+# Structs
+struct_member = P.Group(any_type - identifier - s(';'))
+struct = kw('struct') - identifier - s('{') + \
+         P.Group(P.ZeroOrMore(struct_member)) + \
+         s('}') - s(';')
+
+# Metadata
+metadata_key = P.Or(kw("version")).setName("metadata key")
+metadata = tag('metadata') + s('#') - metadata_key - word
+
+grammar = P.ZeroOrMore(P.Group(struct) | P.Group(metadata))
+grammar.ignore(P.cppStyleComment)
+
+def parse(src):
+    return grammar.parseString(src, parseAll=True)
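+
+# An illustrative input accepted by the grammar above (the struct and member
+# names here are made up for the example):
+#
+#   src = """
+#   #version 1
+#   /* comments are ignored */
+#   struct of_example_msg {
+#       uint32_t xid;
+#       uint8_t[4] pad;
+#       list(of_action) actions;
+#   };
+#   """
+#   results = parse(src)
+#   # results is a pyparsing ParseResults with one Group per struct or
+#   # metadata line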
diff --git a/loxi_front_end/translation.py b/loxi_front_end/translation.py
new file mode 100644
index 0000000..6c39a3a
--- /dev/null
+++ b/loxi_front_end/translation.py
@@ -0,0 +1,126 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+##
+# @brief Translation data between openflow.h and LOXI
+#
+
+import re
+import sys
+        
+def loxi_name(ident):
+    """
+    Return the LOXI name of an openflow.h identifier
+    """
+
+    # Order at the outer level matters: longer (super-string) prefixes must be
+    # listed before their substrings
+    rules = [
+# The following are for #define macros and have highest precedence
+        dict(OFP_MAX_TABLE_NAME_LEN = "OF_MAX_TABLE_NAME_LEN"),
+        dict(OFP_MAX_PORT_NAME_LEN = "OF_MAX_PORT_NAME_LEN"),
+        dict(OFP_TCP_PORT = "OF_TCP_PORT"),
+        dict(OFP_SSL_PORT = "OF_SSL_PORT"),
+        dict(OFP_ETH_ALEN = "OF_ETH_ALEN"),
+        dict(OFP_DEFAULT_MISS_SEND_LEN = "OF_DEFAULT_MISS_SEND_LEN"),
+        dict(OFP_VLAN_NONE = "OF_VLAN_UNTAGGED"),
+        dict(OFP_DL_TYPE_ETH2_CUTOFF = "OF_DL_TYPE_ETH2_CUTOFF"),
+        dict(OFP_DL_TYPE_NOT_ETH_TYPE = "OF_DL_TYPE_NOT_ETH_TYPE"),
+        dict(OFP_FLOW_PERMANENT = "OF_FLOW_PERMANENT"),
+        dict(OFP_DEFAULT_PRIORITY = "OF_DEFAULT_PRIORITY"),
+        dict(DESC_STR_LEN = "OF_DESC_STR_LEN"),
+        dict(SERIAL_NUM_LEN = "OF_SERIAL_NUM_LEN"),
+        dict(OFPQ_ALL = "OF_QUEUE_ALL"),
+        dict(OFPQ_MIN_RATE_UNCFG = "OF_QUEUE_MIN_RATE_UNCFG"),
+        dict(OFPQ_MAX_RATE_UNCFG = "OF_QUEUE_MAX_RATE_UNCFG"),
+        dict(OFP_NO_BUFFER = "OF_BUFFER_ID_NO_BUFFER"),
+
+# These are for enums; they map the prefixes
+        dict(OFPP_MAX = "OF_PORT_NUMBER_MAX"), # Special case
+        dict(OFPP_TABLE = "OF_PORT_DEST_USE_TABLE"), # Special case
+        dict(OFPP_ANY = "OF_PORT_DEST_WILDCARD"), # Special case
+        dict(OFPTC_ = "OF_TABLE_CONFIG_"),
+        dict(OFPIEH_ = "OF_IPV6_EXT_HDR_FLAG_"),
+        dict(OFPMBT_ = "OF_METER_BAND_TYPE_"),
+        dict(OFPMC_ = "OF_METER_MOD_COMMAND_"),
+        dict(OFPMF_ = "OF_METER_FLAG_"),
+        dict(OFPTFFC_ = "OF_TABLE_REQUEST_FAILED_"),
+        dict(OFPMMFC_ = "OF_METER_MOD_FAILED_"),
+        dict(OFPPR_ = "OF_PORT_CHANGE_REASON_"),
+        dict(OFPPMFC_ = "OF_PORT_MOD_FAILED_"),
+        dict(OFPP_ = "OF_PORT_DEST_"),
+        dict(OFPRRFC_ = "OF_ROLE_REQUEST_FAILED_"),
+        dict(OFPRR_ = "OF_FLOW_REMOVED_REASON_"),
+        dict(OFPR_ = "OF_PACKET_IN_REASON_"),
+        dict(OFPC_FRAG_ = "OF_CONFIG_FRAG_"),
+        dict(OFPC_INVALID_ = "OF_CONFIG_INVALID_"),
+        dict(OFPCML_ = "OF_CONTROLLER_PKT_"),
+        dict(OFPCR_ROLE_ = "OF_CONTROLLER_ROLE_"),
+        dict(OFPC_ = "OF_CAPABILITIES_FLAG_"),
+        dict(OFPPC_ = "OF_PORT_CONFIG_FLAG_"),
+        dict(OFPPS_ = "OF_PORT_STATE_FLAG_"),
+        dict(OFPPF_ = "OF_PORT_FEATURE_FLAG_"),
+        dict(OFPTT_ = "OF_TABLE_"),
+        dict(OFPT_ = "OF_OBJ_TYPE_"),
+        dict(OFPMT_ = "OF_MATCH_TYPE_"),
+        dict(OFPM_ = "OF_METER_"),
+        dict(OFPXMC_ = "OF_OXM_CLASS_"),
+        dict(OFPVID_ = "OF_VLAN_TAG_"),
+        dict(OFPGC_ = "OF_GROUP_"),
+        dict(OFPGT_ = "OF_GROUP_TYPE_"),
+        dict(OFPG_ = "OF_GROUP_"),
+        dict(OFPET_ = "OF_ERROR_TYPE_"),
+        dict(OFPFC_ = "OF_FLOW_MOD_COMMAND_"),
+        dict(OFPHFC_ = "OF_HELLO_FAILED_"),
+        dict(OFPBRC_ = "OF_REQUEST_FAILED_"),
+        dict(OFPBAC_ = "OF_ACTION_FAILED_"),
+        dict(OFPBIC_ = "OF_INSTRUCTION_FAILED_"),
+        dict(OFPBMC_ = "OF_MATCH_FAILED_"),
+        dict(OFPGMFC_ = "OF_GROUP_MOD_FAILED_"),
+        dict(OFPTMFC_ = "OF_TABLE_MOD_FAILED_"),
+        dict(OFPFMFC_ = "OF_FLOW_MOD_FAILED_"),
+        dict(OFPQOFC_ = "OF_QUEUE_OP_FAILED_"),
+        dict(OFPSCFC_ = "OF_SWITCH_CONFIG_FAILED_"),
+        dict(OFPQCFC_ = "OF_SWITCH_CONFIG_FAILED_"), # See EXT-208
+        dict(OFPAT_ = "OF_ACTION_TYPE_"),
+        dict(OFPFW_ = "OF_FLOW_WC_V1_"),
+        dict(OFPFF_ = "OF_FLOW_MOD_FLAG_"),
+        dict(OFPST_ = "OF_STATS_TYPE_"),
+        dict(OFPSF_ = "OF_STATS_REPLY_FLAG_"),
+        dict(OFPQT_ = "OF_QUEUE_PROPERTY_"),
+        dict(OFPIT_ = "OF_INSTRUCTION_TYPE_"),
+        dict(OFPGFC_ = "OF_GROUP_CAPABILITIES_"),
+        dict(OFPMP_ = "OF_MULTIPART_"),
+        dict(OFPMPF_ = "OF_MULTIPART_FLAG_"),
+        dict(OFPTFPT_ = "OF_TABLE_FEATURE_"),
+        ]
+
+    for entry in rules:
+        for id_from, id_to in entry.items():
+            if re.match(id_from, ident):
+                return re.sub(id_from, id_to, ident)
+    return None
+
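+# Examples, following directly from the rules above:
+#
+#   loxi_name("OFPT_PACKET_IN")  # -> "OF_OBJ_TYPE_PACKET_IN"
+#   loxi_name("OFP_TCP_PORT")    # -> "OF_TCP_PORT"
+#   loxi_name("OFPXYZ_BOGUS")    # -> None (no rule matches; caller warns and skips)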
diff --git a/loxi_front_end/type_maps.py b/loxi_front_end/type_maps.py
new file mode 100644
index 0000000..ecfd850
--- /dev/null
+++ b/loxi_front_end/type_maps.py
@@ -0,0 +1,1101 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+#
+# Miscellaneous type information
+#
+# Define the map between sub-class types and wire values.  In each
+# case, an array indexed by wire version gives a hash from identifier
+# to wire value.
+#
+
+import of_g
+import sys
+from generic_utils import *
+import oxm
+import loxi_utils.loxi_utils as loxi_utils
+
+invalid_type = "invalid_type"
+invalid_value = "0xeeee"  # Note, as a string
+
+################################################################
+#
+# Define type data for inheritance classes:
+#   instructions, actions, queue properties and OXM
+#
+# Messages are not in this group; they're treated specially for now
+#
+# These are indexed by wire protocol number
+#
+################################################################
+
+instruction_types = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(),
+
+    # version 1.1
+    of_g.VERSION_1_1:dict(
+        goto_table = 1,
+        write_metadata = 2,
+        write_actions = 3,
+        apply_actions = 4,
+        clear_actions = 5,
+        experimenter = 0xffff
+        ),
+
+    # version 1.2
+    of_g.VERSION_1_2:dict(
+        goto_table = 1,
+        write_metadata = 2,
+        write_actions = 3,
+        apply_actions = 4,
+        clear_actions = 5,
+        experimenter = 0xffff
+        ),
+
+    # version 1.3
+    of_g.VERSION_1_3:dict(
+        goto_table = 1,
+        write_metadata = 2,
+        write_actions = 3,
+        apply_actions = 4,
+        clear_actions = 5,
+        meter = 6,
+        experimenter = 0xffff
+        )
+    }
+
+of_1_3_action_types = dict(
+    output       = 0,
+    copy_ttl_out = 11,
+    copy_ttl_in  = 12,
+    set_mpls_ttl = 15,
+    dec_mpls_ttl = 16,
+    push_vlan    = 17,
+    pop_vlan     = 18,
+    push_mpls    = 19,
+    pop_mpls     = 20,
+    set_queue    = 21,
+    group        = 22,
+    set_nw_ttl   = 23,
+    dec_nw_ttl   = 24,
+    set_field    = 25,
+    push_pbb     = 26,
+    pop_pbb      = 27,
+    experimenter = 0xffff,
+    bsn_mirror = 0xffff,
+    bsn_set_tunnel_dst = 0xffff,
+    nicira_dec_ttl = 0xffff
+    )
+
+# Indexed by OF version
+action_types = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(
+        output = 0,
+        set_vlan_vid = 1,
+        set_vlan_pcp = 2,
+        strip_vlan = 3,
+        set_dl_src = 4,
+        set_dl_dst = 5,
+        set_nw_src = 6,
+        set_nw_dst = 7,
+        set_nw_tos = 8,
+        set_tp_src = 9,
+        set_tp_dst = 10,
+        enqueue = 11,
+        experimenter = 0xffff,
+        bsn_mirror = 0xffff,
+        bsn_set_tunnel_dst = 0xffff,
+        nicira_dec_ttl = 0xffff
+        ),
+
+    # version 1.1
+    of_g.VERSION_1_1:dict(
+        output = 0,
+        set_vlan_vid = 1,
+        set_vlan_pcp = 2,
+        set_dl_src = 3,
+        set_dl_dst = 4,
+        set_nw_src = 5,
+        set_nw_dst = 6,
+        set_nw_tos = 7,
+        set_nw_ecn = 8,
+        set_tp_src = 9,
+        set_tp_dst = 10,
+        copy_ttl_out = 11,
+        copy_ttl_in = 12,
+        set_mpls_label = 13,
+        set_mpls_tc = 14,
+        set_mpls_ttl = 15,
+        dec_mpls_ttl = 16,
+        push_vlan = 17,
+        pop_vlan = 18,
+        push_mpls = 19,
+        pop_mpls = 20,
+        set_queue = 21,
+        group = 22,
+        set_nw_ttl = 23,
+        dec_nw_ttl = 24,
+        experimenter = 0xffff,
+        bsn_mirror = 0xffff,
+        bsn_set_tunnel_dst = 0xffff,
+        nicira_dec_ttl = 0xffff
+        ),
+
+    # version 1.2
+    of_g.VERSION_1_2:dict(
+        output       = 0,
+        copy_ttl_out = 11,
+        copy_ttl_in  = 12,
+        set_mpls_ttl = 15,
+        dec_mpls_ttl = 16,
+        push_vlan    = 17,
+        pop_vlan     = 18,
+        push_mpls    = 19,
+        pop_mpls     = 20,
+        set_queue    = 21,
+        group        = 22,
+        set_nw_ttl   = 23,
+        dec_nw_ttl   = 24,
+        set_field    = 25,
+        experimenter = 0xffff,
+        bsn_mirror = 0xffff,
+        bsn_set_tunnel_dst = 0xffff,
+        nicira_dec_ttl = 0xffff
+        ),
+
+    # version 1.3
+    of_g.VERSION_1_3:of_1_3_action_types
+
+    }
+
+action_id_types = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(),
+    of_g.VERSION_1_1:dict(),
+    of_g.VERSION_1_2:dict(),
+    of_g.VERSION_1_3:of_1_3_action_types
+    }
+
+queue_prop_types = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(
+        min_rate      = 1,
+        # experimenter  = 0xffff
+        ),
+    # version 1.1
+    of_g.VERSION_1_1:dict(
+        min_rate      = 1,
+        #  experimenter  = 0xffff
+        ),
+    # version 1.2
+    of_g.VERSION_1_2:dict(
+        min_rate      = 1,
+        max_rate      = 2,
+        experimenter  = 0xffff
+        ),
+    # version 1.3
+    of_g.VERSION_1_3:dict(
+        min_rate      = 1,
+        max_rate      = 2,
+        experimenter  = 0xffff
+        )
+    }
+
+oxm_types = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(),
+
+    # version 1.1
+    of_g.VERSION_1_1:dict(),
+
+    # version 1.2
+    of_g.VERSION_1_2:oxm.oxm_wire_type,
+
+    # version 1.3
+    of_g.VERSION_1_3:oxm.oxm_wire_type  # FIXME needs update for 1.3?
+    }
+
+hello_elem_types = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(),
+
+    # version 1.1
+    of_g.VERSION_1_1:dict(),
+
+    # version 1.2
+    of_g.VERSION_1_2:dict(),
+
+    # version 1.3
+    of_g.VERSION_1_3:dict(
+        versionbitmap = 1
+        )
+    }
+
+table_feature_prop_types = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(),
+
+    # version 1.1
+    of_g.VERSION_1_1:dict(),
+
+    # version 1.2
+    of_g.VERSION_1_2:dict(),
+
+    # version 1.3
+    of_g.VERSION_1_3:dict(
+        instructions           = 0,
+        instructions_miss      = 1,
+        next_tables            = 2,
+        next_tables_miss       = 3,
+        write_actions          = 4,
+        write_actions_miss     = 5,
+        apply_actions          = 6,
+        apply_actions_miss     = 7,
+        match                  = 8,
+        wildcards              = 10,
+        write_setfield         = 12,
+        write_setfield_miss    = 13,
+        apply_setfield         = 14,
+        apply_setfield_miss    = 15,
+#        experimenter           = 0xFFFE,
+#        experimenter_miss      = 0xFFFF,
+        experimenter            = 0xFFFF,  # Wrong: should be experimenter_miss
+        )
+    }
+
+meter_band_types = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(),
+
+    # version 1.1
+    of_g.VERSION_1_1:dict(),
+
+    # version 1.2
+    of_g.VERSION_1_2:dict(),
+
+    # version 1.3
+    of_g.VERSION_1_3:dict(
+        drop                   = 1,
+        dscp_remark            = 2,
+        experimenter           = 0xFFFF,
+        )
+    }
+
+# All inheritance data for non-messages
+inheritance_data = dict(
+    of_instruction = instruction_types,
+    of_action = action_types,
+    of_action_id = action_id_types,
+    of_oxm = oxm_types,
+    of_queue_prop = queue_prop_types,
+    of_hello_elem = hello_elem_types,
+    of_table_feature_prop = table_feature_prop_types,
+    of_meter_band = meter_band_types
+    )
+
+################################################################
+# Now generate the maps from parent to list of subclasses
+################################################################
+
+# # These lists have entries which are a fixed type, no inheritance
+# fixed_lists = [
+#     "of_list_bucket",
+#     "of_list_bucket_counter",
+#     "of_list_flow_stats_entry",
+#     "of_list_group_desc_stats_entry",
+#     "of_list_group_stats_entry",
+#     "of_list_packet_queue",
+#     "of_list_port_desc",
+#     "of_list_port_stats_entry",
+#     "of_list_queue_stats_entry",
+#     "of_list_table_stats_entry"
+#     ]
+
+# for cls in fixed_lists:
+#     base_type = list_to_entry_type(cls)
+#     of_g.inheritance_map[base_type] = [base_type]
+
+inheritance_map = dict()
+for parent, versioned in inheritance_data.items():
+    inheritance_map[parent] = set()
+    for ver, subclasses in versioned.items():
+        for subcls in subclasses:
+            inheritance_map[parent].add(subcls)
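+
+# For illustration, after this loop inheritance_map["of_instruction"] is the
+# union of the instruction subtype names across all versions:
+# {"goto_table", "write_metadata", "write_actions", "apply_actions",
+#  "clear_actions", "meter", "experimenter"}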
+
+def class_is_virtual(cls):
+    """
+    Returns True if cls is a virtual class
+    """
+    if cls in inheritance_map:
+        return True
+    if cls.find("header") > 0:
+        return True
+    if loxi_utils.class_is_list(cls):
+        return True
+    return False
+
+################################################################
+#
+# These are message types
+#
+################################################################
+
+message_types = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(
+        hello                   = 0,
+        error_msg               = 1,
+        echo_request            = 2,
+        echo_reply              = 3,
+        experimenter            = 4,
+        features_request        = 5,
+        features_reply          = 6,
+        get_config_request      = 7,
+        get_config_reply        = 8,
+        set_config              = 9,
+        packet_in               = 10,
+        flow_removed            = 11,
+        port_status             = 12,
+        packet_out              = 13,
+        flow_mod                = 14,
+        port_mod                = 15,
+        stats_request           = 16,
+        stats_reply             = 17,
+        barrier_request         = 18,
+        barrier_reply           = 19,
+        queue_get_config_request = 20,
+        queue_get_config_reply  = 21,
+        table_mod               = 22    # Unofficial 1.0 extension
+        ),
+
+    # version 1.1
+    of_g.VERSION_1_1:dict(
+        hello                   = 0,
+        error_msg               = 1,
+        echo_request            = 2,
+        echo_reply              = 3,
+        experimenter            = 4,
+        features_request        = 5,
+        features_reply          = 6,
+        get_config_request      = 7,
+        get_config_reply        = 8,
+        set_config              = 9,
+        packet_in               = 10,
+        flow_removed            = 11,
+        port_status             = 12,
+        packet_out              = 13,
+        flow_mod                = 14,
+        group_mod               = 15,
+        port_mod                = 16,
+        table_mod               = 17,
+        stats_request           = 18,
+        stats_reply             = 19,
+        barrier_request         = 20,
+        barrier_reply           = 21,
+        queue_get_config_request = 22,
+        queue_get_config_reply  = 23
+        ),
+
+    # version 1.2
+    of_g.VERSION_1_2:dict(
+        hello                   = 0,
+        error_msg               = 1,
+        echo_request            = 2,
+        echo_reply              = 3,
+        experimenter            = 4,
+        features_request        = 5,
+        features_reply          = 6,
+        get_config_request      = 7,
+        get_config_reply        = 8,
+        set_config              = 9,
+        packet_in               = 10,
+        flow_removed            = 11,
+        port_status             = 12,
+        packet_out              = 13,
+        flow_mod                = 14,
+        group_mod               = 15,
+        port_mod                = 16,
+        table_mod               = 17,
+        stats_request           = 18,
+        stats_reply             = 19,
+        barrier_request         = 20,
+        barrier_reply           = 21,
+        queue_get_config_request = 22,
+        queue_get_config_reply   = 23,
+        role_request            = 24,
+        role_reply              = 25,
+        ),
+
+    # version 1.3
+    of_g.VERSION_1_3:dict(
+        hello                   = 0,
+        error_msg               = 1,
+        echo_request            = 2,
+        echo_reply              = 3,
+        experimenter            = 4,
+        features_request        = 5,
+        features_reply          = 6,
+        get_config_request      = 7,
+        get_config_reply        = 8,
+        set_config              = 9,
+        packet_in               = 10,
+        flow_removed            = 11,
+        port_status             = 12,
+        packet_out              = 13,
+        flow_mod                = 14,
+        group_mod               = 15,
+        port_mod                = 16,
+        table_mod               = 17,
+        stats_request           = 18,  # FIXME Multipart
+        stats_reply             = 19,
+        barrier_request         = 20,
+        barrier_reply           = 21,
+        queue_get_config_request = 22,
+        queue_get_config_reply   = 23,
+        role_request            = 24,
+        role_reply              = 25,
+        async_get_request       = 26,
+        async_get_reply         = 27,
+        async_set               = 28,
+        meter_mod               = 29
+        )
+    }
+
+################################################################
+#
+# These are other objects that have a notion of type but are
+# not (yet) promoted to objects with inheritance
+#
+################################################################
+
+stats_types = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(
+        desc = 0,
+        flow = 1,
+        aggregate = 2,
+        table = 3,
+        port = 4,
+        queue = 5,
+        experimenter = 0xffff
+        ),
+
+    # version 1.1
+    of_g.VERSION_1_1:dict(
+        desc = 0,
+        flow = 1,
+        aggregate = 2,
+        table = 3,
+        port = 4,
+        queue = 5,
+        group = 6,
+        group_desc = 7,
+        experimenter = 0xffff
+        ),
+
+    # version 1.2
+    of_g.VERSION_1_2:dict(
+        desc = 0,
+        flow = 1,
+        aggregate = 2,
+        table = 3,
+        port = 4,
+        queue = 5,
+        group = 6,
+        group_desc = 7,
+        group_features = 8,
+        experimenter = 0xffff
+        ),
+
+    # version 1.3
+    of_g.VERSION_1_3:dict(
+        desc = 0,
+        flow = 1,
+        aggregate = 2,
+        table = 3,
+        port = 4,
+        queue = 5,
+        group = 6,
+        group_desc = 7,
+        group_features = 8,
+        meter = 9,
+        meter_config = 10,
+        meter_features = 11,
+        table_features = 12,
+        port_desc = 13,
+        experimenter = 0xffff
+        )
+    }
+
+common_flow_mod_types = dict(
+    add = 0,
+    modify = 1,
+    modify_strict = 2,
+    delete = 3,
+    delete_strict = 4
+    )
+
+flow_mod_types = {
+    # version 1.0
+    of_g.VERSION_1_0:common_flow_mod_types,
+    of_g.VERSION_1_1:common_flow_mod_types,
+    of_g.VERSION_1_2:common_flow_mod_types,
+    of_g.VERSION_1_3:common_flow_mod_types
+    }
+
+# These do not translate to objects (yet)
+error_types = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(
+        hello_failed        = 0,
+        bad_request         = 1,
+        bad_action          = 2,
+        flow_mod_failed     = 3,
+        port_mod_failed     = 4,
+        queue_op_failed     = 5
+        ),
+
+    # version 1.1
+    of_g.VERSION_1_1:dict(
+        hello_failed         = 0,
+        bad_request          = 1,
+        bad_action           = 2,
+        bad_instruction      = 3,
+        bad_match            = 4,
+        flow_mod_failed      = 5,
+        group_mod_failed     = 6,
+        port_mod_failed      = 7,
+        table_mod_failed     = 8,
+        queue_op_failed      = 9,
+        switch_config_failed = 10
+        ),
+
+    # version 1.2
+    of_g.VERSION_1_2:dict(
+        hello_failed         = 0,
+        bad_request          = 1,
+        bad_action           = 2,
+        bad_instruction      = 3,
+        bad_match            = 4,
+        flow_mod_failed      = 5,
+        group_mod_failed     = 6,
+        port_mod_failed      = 7,
+        table_mod_failed     = 8,
+        queue_op_failed      = 9,
+        switch_config_failed = 10,
+        role_request_failed  = 11,
+        experimenter = 0xffff
+        ),
+
+    # version 1.3
+    of_g.VERSION_1_3:dict(
+        hello_failed         = 0,
+        bad_request          = 1,
+        bad_action           = 2,
+        bad_instruction      = 3,
+        bad_match            = 4,
+        flow_mod_failed      = 5,
+        group_mod_failed     = 6,
+        port_mod_failed      = 7,
+        table_mod_failed     = 8,
+        queue_op_failed      = 9,
+        switch_config_failed = 10,
+        role_request_failed  = 11,
+        meter_mod_failed     = 12,
+        table_features_failed= 13,
+        experimenter = 0xffff
+        )
+    }
+
+##
+# These are the objects whose length is specified by an external
+# reference, specifically another data member in the class.
+# 
+#external_length_spec = {
+#    ("of_packet_out", "actions", OF_VERSION_1_0) : "actions_len",
+#    ("of_packet_out", "actions", OF_VERSION_1_1) : "actions_len",
+#    ("of_packet_out", "actions", OF_VERSION_1_2) : "actions_len",
+#    ("of_packet_out", "actions", OF_VERSION_1_3) : "actions_len"
+#}
+
+
+################################################################
+#
+# type_val is the primary data structure that maps an 
+# (class_name, version) pair to the wire data type value
+#
+################################################################
+
+type_val = dict()
+
+for version, classes in message_types.items():
+    for cls in classes:
+        name = "of_" + cls
+        type_val[(name, version)] = classes[cls]
+
+for parent, versioned in inheritance_data.items():
+    for version, subclasses in versioned.items():
+        for subcls, value in subclasses.items():
+            name = parent + "_" + subcls
+            type_val[(name, version)] = value
+
+# Special case OF-1.2 match type
+type_val[("of_match_v3", of_g.VERSION_1_2)] = 0x8000
+type_val[("of_match_v3", of_g.VERSION_1_3)] = 0x8000
+
+# Utility function
+def dict_to_array(d, m_val, def_val=-1):
+    """
+    Given a dictionary, d, with each value a small integer,
+    produce an array indexed by the integer whose value is the key.
+    @param d The dictionary
+    @param m_val Ignore values greater than m_val
+    @param def_val The default value (for indices not in range of d)
+    """
+
+    # Get the max value in range for hash
+    max_val = 0
+    for key in d:
+        if (d[key] > max_val) and (d[key] < m_val):
+            max_val = d[key]
+    ar = []
+    for x in range(0, max_val + 1):
+        ar.append(def_val)
+    for key in d:
+        if (d[key] < m_val):
+            ar[d[key]] = key
+    return ar
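+
+# For example, with m_val = 16:
+#   dict_to_array({"add": 0, "delete": 3}, 16, invalid_type)
+#   # -> ["add", "invalid_type", "invalid_type", "delete"]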
+
+def type_array_len(version_indexed, max_val):
+    """
+    Given versioned information about a type, calculate how long
+    the unified array should be.
+
+    @param version_indexed A dict indexed by version. Each value is a 
+    dict indexed by a name and whose value is an integer
+    @param max_val Ignore values greater than this for length calcs
+    """
+    # First, find the max length of all arrays
+    arr_len = 0
+    for version, val_dict in version_indexed.items():
+        ar = dict_to_array(val_dict, max_val, invalid_type)
+        if arr_len < len(ar):
+            arr_len = len(ar)
+    return arr_len
+
+# FIXME:  Need to move update for multipart messages
+
+stats_reply_list = [
+    "of_aggregate_stats_reply",
+    "of_desc_stats_reply",
+    "of_experimenter_stats_reply",
+    "of_flow_stats_reply",
+    "of_group_stats_reply",
+    "of_group_desc_stats_reply",
+    "of_group_features_stats_reply",
+    "of_meter_stats_reply",
+    "of_meter_config_stats_reply",
+    "of_meter_features_stats_reply",
+    "of_port_stats_reply",
+    "of_port_desc_stats_reply",
+    "of_queue_stats_reply",
+    "of_table_stats_reply",
+    "of_table_features_stats_reply"
+]
+
+stats_request_list = [
+    "of_aggregate_stats_request",
+    "of_desc_stats_request",
+    "of_experimenter_stats_request",
+    "of_flow_stats_request",
+    "of_group_stats_request",
+    "of_group_desc_stats_request",
+    "of_group_features_stats_request",
+    "of_meter_stats_request",
+    "of_meter_config_stats_request",
+    "of_meter_features_stats_request",
+    "of_port_stats_request",
+    "of_port_desc_stats_request",
+    "of_queue_stats_request",
+    "of_table_stats_request",
+    "of_table_features_stats_request"
+]
+
+flow_mod_list = [
+    "of_flow_add",
+    "of_flow_modify",
+    "of_flow_modify_strict",
+    "of_flow_delete",
+    "of_flow_delete_strict"
+]
+
+def sub_class_map(base_type, version):
+    """
+    Returns an iterable object giving the instance names and subclass types
+    for the given base_type and version
+    """
+    rv = []
+    if base_type not in inheritance_map:
+        return rv
+
+    for instance in inheritance_map[base_type]:
+        subcls = loxi_utils.instance_to_class(instance, base_type)
+        if not loxi_utils.class_in_version(subcls, version):
+            continue
+        rv.append((instance, subcls))
+
+    return rv
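+
+# Illustrative result shape (the subclass names come from
+# loxi_utils.instance_to_class, so the exact strings depend on that helper):
+#
+#   sub_class_map("of_action", of_g.VERSION_1_0)
+#   # -> [("output", "of_action_output"), ("enqueue", "of_action_enqueue"), ...]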
+
+################################################################
+#
+# Extension related data and functions
+#
+################################################################
+
+# Per OF Version, per experimenter, map exp msg type (subtype) to object IDs
+# @fixme Generate defines for OF_<exp>_SUBTYPE_<msg> for the values below?
+extension_message_subtype = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(  # Version 1.0 extensions
+        bsn = {   # BSN extensions; indexed by class name, value is subtype
+            "of_bsn_set_ip_mask"             : 0,
+            "of_bsn_get_ip_mask_request"     : 1,
+            "of_bsn_get_ip_mask_reply"       : 2,
+            "of_bsn_set_mirroring"           : 3,
+            "of_bsn_get_mirroring_request"   : 4,
+            "of_bsn_get_mirroring_reply"     : 5,
+            "of_bsn_shell_command"           : 6,
+            "of_bsn_shell_output"            : 7,
+            "of_bsn_shell_status"            : 8,
+            "of_bsn_get_interfaces_request"  : 9,
+            "of_bsn_get_interfaces_reply"    : 10,
+            },
+        nicira = {   # Nicira extensions, value is subtype
+            "of_nicira_controller_role_request"      : 10,
+            "of_nicira_controller_role_reply"        : 11,
+            },
+        ),
+    of_g.VERSION_1_1:dict(  # Version 1.1 extensions
+        bsn = {   # BSN extensions; indexed by class name, value is subtype
+            "of_bsn_set_mirroring"           : 3,
+            "of_bsn_get_mirroring_request"   : 4,
+            "of_bsn_get_mirroring_reply"     : 5,
+            "of_bsn_get_interfaces_request"  : 9,
+            "of_bsn_get_interfaces_reply"    : 10,
+            },
+        ),
+    of_g.VERSION_1_2:dict(  # Version 1.2 extensions
+        bsn = {   # BSN extensions; indexed by class name, value is subtype
+            "of_bsn_set_mirroring"           : 3,
+            "of_bsn_get_mirroring_request"   : 4,
+            "of_bsn_get_mirroring_reply"     : 5,
+            "of_bsn_get_interfaces_request"  : 9,
+            "of_bsn_get_interfaces_reply"    : 10,
+            },
+        ),
+    of_g.VERSION_1_3:dict(  # Version 1.3 extensions
+        bsn = {   # BSN extensions; indexed by class name, value is subtype
+            "of_bsn_set_mirroring"           : 3,
+            "of_bsn_get_mirroring_request"   : 4,
+            "of_bsn_get_mirroring_reply"     : 5,
+            "of_bsn_get_interfaces_request"  : 9,
+            "of_bsn_get_interfaces_reply"    : 10,
+            },
+        ),
+}
+
+# Set to empty dict if no extension actions defined
+# Per OF Version, per experimenter, map actions to subtype
+extension_action_subtype = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(  # Version 1.0 extensions
+        bsn = {   # of_action_bsn_
+            "of_action_bsn_mirror"           : 1,
+            "of_action_bsn_set_tunnel_dst"   : 2,
+            },
+        nicira = {   # of_action_nicira_
+            "of_action_nicira_dec_ttl"       : 18,
+            }
+        ),
+    of_g.VERSION_1_1:dict(  # Version 1.1 extensions
+        bsn = {   # of_action_bsn_
+            "of_action_bsn_mirror"           : 1,
+            "of_action_bsn_set_tunnel_dst"   : 2,
+            },
+        nicira = {   # of_action_nicira_
+            "of_action_nicira_dec_ttl"       : 18,
+            }
+        ),
+    of_g.VERSION_1_2:dict(  # Version 1.2 extensions
+        bsn = {   # of_action_bsn_
+            "of_action_bsn_mirror"           : 1,
+            "of_action_bsn_set_tunnel_dst"   : 2,
+            },
+        nicira = {   # of_action_nicira_
+            "of_action_nicira_dec_ttl"       : 18,
+            }
+        ),
+    of_g.VERSION_1_3:dict(  # Version 1.3 extensions
+        bsn = {   # of_action_bsn_
+            "of_action_bsn_mirror"           : 1,
+            "of_action_bsn_set_tunnel_dst"   : 2,
+            },
+        nicira = {   # of_action_nicira_
+            "of_action_nicira_dec_ttl"       : 18,
+            }
+        ),
+}
+
+# Set to empty dict if no extension actions defined
+# Per OF Version, per experimenter, map actions to subtype
+extension_action_id_subtype = {
+    # version 1.0
+    of_g.VERSION_1_0:dict(),
+    of_g.VERSION_1_1:dict(),
+    of_g.VERSION_1_2:dict(),
+    of_g.VERSION_1_3:dict(  # Version 1.3 extensions
+        bsn = {   # of_action_bsn_
+            "of_action_id_bsn_mirror"           : 1,
+            "of_action_id_bsn_set_tunnel_dst"   : 2,
+            },
+        nicira = {   # of_action_nicira_
+            "of_action_id_nicira_dec_ttl"       : 18,
+            }
+        ),
+}
+
+# Set to empty dict if no extension instructions defined
+extension_instruction_subtype = {}
+
+# Set to empty dict if no extension instructions defined
+extension_queue_prop_subtype = {}
+
+# Set to empty dict if no extension instructions defined
+extension_table_feature_prop_subtype = {}
+
+extension_objects = [
+    extension_message_subtype,
+    extension_action_subtype,
+    extension_action_id_subtype,
+    extension_instruction_subtype,
+    extension_queue_prop_subtype,
+    extension_table_feature_prop_subtype
+]
+
+################################################################
+# These are extension type generic (for messages, actions...)
+################################################################
+
+def extension_to_experimenter_name(cls):
+    """
+    Return the name of the experimenter if class is an
+    extension, else None
+
+    This is brute force; we search all extension data for a match
+    """
+    
+    for ext_obj in extension_objects:
+        for version, exp_list in ext_obj.items():
+            for exp_name, classes in exp_list.items():
+                if cls in classes:
+                    return exp_name
+    return None
+
+def extension_to_experimenter_id(cls):
+    """
+    Return the ID of the experimenter if class is an
+    extension, else None
+    """
+    exp_name = extension_to_experimenter_name(cls)
+    if exp_name:
+        return of_g.experimenter_name_to_id[exp_name]
+    return None
+
+def extension_to_experimenter_macro_name(cls):
+    """
+    Return the "macro name" of the ID of the experimenter if class is an
+    extension, else None
+    """
+    exp_name = extension_to_experimenter_name(cls)
+    if exp_name:
+        return "OF_EXPERIMENTER_ID_" + exp_name.upper()
+    return None
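+
+# For example, given the extension tables above:
+#   extension_to_experimenter_name("of_bsn_set_ip_mask")        # -> "bsn"
+#   extension_to_experimenter_macro_name("of_bsn_set_ip_mask")  # -> "OF_EXPERIMENTER_ID_BSN"
+#   extension_to_experimenter_name("of_packet_in")              # -> None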
+
+def extension_to_subtype(cls, version):
+    """
+    Generic across all extension objects, return subtype identifier
+
+    Note: this searches all versions; the version parameter is currently unused.
+    """
+    for ext_obj in extension_objects:
+        for ver, exp_list in ext_obj.items():
+            for exp_name, classes in exp_list.items():
+                if cls in classes:
+                    return classes[cls]
+
+def class_is_extension(cls, version):
+    """
+    Return True if class, version is recognized as an extension
+    of any type (message, action....)
+
+    Accepts of_g.OF_VERSION_ANY
+    """
+
+    for ext_obj in extension_objects:
+        if cls_is_ext_obj(cls, version, ext_obj):
+            return True
+
+    return False
+
+# Internal
+def cls_is_ext_obj(cls, version, ext_obj):
+    """
+    @brief Return True if cls in an extension of type ext_obj
+    @param cls The class to check
+    @param version The version to check
+    @param ext_obj The extension object dictionary (messages, actions...)
+
+    Accepts of_g.VERSION_ANY
+    """
+
+    # Call with each version if "any" is passed
+    if version == of_g.VERSION_ANY:
+        for v in of_g.of_version_range:
+            if cls_is_ext_obj(cls, v, ext_obj):
+                return True
+    else:   # Version specified
+        if version in ext_obj:
+            for exp, subtype_vals in ext_obj[version].items():
+                if cls in subtype_vals:
+                    return True
+
+    return False
+    
+################################################################
+# These are extension message specific
+################################################################
+
+def message_is_extension(cls, version):
+    """
+    Return True if cls, version is recognized as an  extension
+    This is brute force, searching records for a match
+    """
+    return cls_is_ext_obj(cls, version, extension_message_subtype)
+
+def extension_message_to_subtype(cls, version):
+    """
+    Return the subtype of the experimenter message if the class is an
+    extension, else None
+    """
+    if version in extension_message_subtype:
+        for exp, classes in extension_message_subtype[version].items():
+            for ext_class, subtype in classes.items():
+                if cls == ext_class:
+                    return subtype
+    return None
+
+################################################################
+# These are extension action specific
+################################################################
+
+def action_is_extension(cls, version):
+    """
+    Return True if cls, version is recognized as an action extension
+    This is brute force, searching records for a match
+    """
+    return cls_is_ext_obj(cls, version, extension_action_subtype)
+
+def extension_action_to_subtype(cls, version):
+    """
+    Return the ID of the action subtype (for its experimenter)
+    if class is an action extension, else None
+    """
+    if version in extension_action_subtype:
+        for exp, classes in extension_action_subtype[version].items():
+            if cls in classes:
+                return classes[cls]
+
+    return None
+
+################################################################
+# These are extension action specific
+################################################################
+
+def action_id_is_extension(cls, version):
+    """
+    Return True if cls, version is recognized as an action ID extension
+    This is brute force, searching records for a match
+    """
+    if version not in [of_g.VERSION_1_3]: # Action IDs only 1.3
+        return False
+    return cls_is_ext_obj(cls, version, extension_action_id_subtype)
+
+def extension_action_id_to_subtype(cls, version):
+    """
+    Return the ID of the action ID subtype (for its experimenter)
+    if class is an action ID extension, else None
+    """
+    if version in extension_action_id_subtype:
+        for exp, classes in extension_action_id_subtype[version].items():
+            if cls in classes:
+                return classes[cls]
+
+    return None
+
+################################################################
+# These are extension instruction specific
+################################################################
+
+def instruction_is_extension(cls, version):
+    """
+    Return True if cls, version is recognized as an instruction extension
+    This is brute force, searching records for a match
+    """
+    return cls_is_ext_obj(cls, version, extension_instruction_subtype)
+
+################################################################
+# These are extension queue_prop specific
+################################################################
+
+def queue_prop_is_extension(cls, version):
+    """
+    Return True if cls, version is recognized as a queue property extension
+    This is brute force, searching records for a match
+    """
+    return cls_is_ext_obj(cls, version, extension_queue_prop_subtype)
+
+################################################################
+# These are extension table_feature_prop specific
+################################################################
+
+def table_feature_prop_is_extension(cls, version):
+    """
+    Return True if cls, version is recognized as a table features property extension
+    This is brute force, searching records for a match
+    """
+    return cls_is_ext_obj(cls, version,
+                          extension_table_feature_prop_subtype)
diff --git a/loxi_utils/__init__.py b/loxi_utils/__init__.py
new file mode 100644
index 0000000..5e4e379
--- /dev/null
+++ b/loxi_utils/__init__.py
@@ -0,0 +1,26 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
diff --git a/loxi_utils/loxi_utils.py b/loxi_utils/loxi_utils.py
new file mode 100644
index 0000000..cc3e635
--- /dev/null
+++ b/loxi_utils/loxi_utils.py
@@ -0,0 +1,520 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+@brief Utilities involving LOXI naming conventions
+
+Utility functions for OpenFlow class generation 
+
+These may need to be sorted out into language specific functions
+"""
+
+import of_g
+import tenjin
+
+def class_signature(members):
+    """
+    Generate a signature string for a class in canonical form
+
+    @param members The list of member dicts (m_type, name, offset) for the class
+    """
+    return ";".join([",".join([x["m_type"], x["name"], str(x["offset"])])
+                     for x in members])
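+
+# For example:
+#   class_signature([dict(m_type="uint32_t", name="xid", offset=4),
+#                    dict(m_type="uint16_t", name="flags", offset=8)])
+#   # -> "uint32_t,xid,4;uint16_t,flags,8"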
+
+def type_dec_to_count_base(m_type):
+    """
+    Resolve a type declaration like uint8_t[4] to a count (4) and base_type
+    (uint8_t)
+
+    @param m_type The string type declaration to process
+    """
+    count = 1
+    chk_ar = m_type.split('[')
+    if len(chk_ar) > 1:
+        count_str = chk_ar[1].split(']')[0]
+        if count_str in of_g.ofp_constants:
+            count = of_g.ofp_constants[count_str]
+        else:
+            count = int(count_str)
+        base_type = chk_ar[0]
+    else:
+        base_type = m_type
+    return count, base_type
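+
+# Illustrative examples (assumed inputs, not from the original source):
+#   type_dec_to_count_base("uint8_t[4]")                  -> (4, "uint8_t")
+#   type_dec_to_count_base("char[OF_MAX_TABLE_NAME_LEN]") -> (32, "char")
+#   type_dec_to_count_base("uint32_t")                    -> (1, "uint32_t")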
+
+##
+# Class types:
+#
+# Virtual
+#    A virtual class is one which does not have an explicit wire
+#    representation.  For example, an inheritance super class
+#    or a list type.
+#
+# List
+#    A list of objects of some other type
+#
+# TLV16
+#    The wire representation starts with 16-bit type and length fields
+#
+# OXM
+#    An extensible match object
+#
+# Message
+#    A top level OpenFlow message
+#
+#
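+#
+# Illustrative mapping (assumed class names, not from the original source):
+# of_echo_request is a Message, of_action_output is TLV16, of_oxm_eth_type
+# is an OXM, and of_list_action is a Virtual list type.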
+
+def class_is_message(cls):
+    """
+    Return True if cls is a message object based on info in unified
+    """
+    return "xid" in of_g.unified[cls]["union"] and cls != "of_header"
+
+def class_is_tlv16(cls):
+    """
+    Return True if cls_name is an object which uses uint16 for type and length
+    """
+    if cls.find("of_action") == 0: # Includes of_action_id classes
+        return True
+    if cls.find("of_instruction") == 0:
+        return True
+    if cls.find("of_queue_prop") == 0:
+        return True
+    if cls.find("of_table_feature_prop") == 0:
+        return True
+    # *sigh*
+    if cls.find("of_meter_band_stats") == 0:  # NOT A TLV
+        return False
+    if cls.find("of_meter_band") == 0:
+        return True
+    if cls.find("of_hello_elem") == 0:
+        return True
+    if cls == "of_match_v3":
+        return True
+    if cls == "of_match_v4":
+        return True
+    return False
+
+def class_is_u16_len(cls):
+    """
+    Return True if cls_name is an object which uses initial uint16 length
+    """
+    return cls in ["of_group_desc_stats_entry", "of_group_stats_entry",
+                   "of_flow_stats_entry", "of_bucket", "of_table_features"]
+
+def class_is_oxm(cls):
+    """
+    Return True if cls_name is an OXM object
+    """
+    if cls.find("of_oxm") == 0:
+        return True
+    return False
+
+def class_is_action(cls):
+    """
+    Return True if cls_name is an action object
+
+    Note that action_id is not an action object, though it has
+    the same header.  It looks like an action header, but the type
+    is used to identify a kind of action; it does not indicate the
+    type of the object that follows.
+    """
+    if cls.find("of_action_id") == 0:
+        return False
+    if cls.find("of_action") == 0:
+        return True
+
+    # For each vendor, check for vendor specific action
+    for exp in of_g.experimenter_name_to_id:
+        if cls.find("of_action" + exp) == 0:
+            return True
+
+    return False
+
+def class_is_action_id(cls):
+    """
+    Return True if cls_name is an action_id object
+
+    Note that an action_id is not an action object, though it has
+    the same header.  It looks like an action header, but the type
+    is used to identify a kind of action; it does not indicate the
+    type of the object that follows.
+    """
+    if cls.find("of_action_id") == 0:
+        return True
+
+    # For each vendor, check for vendor specific action
+    for exp in of_g.experimenter_name_to_id:
+        if cls.find("of_action_id_" + exp) == 0:
+            return True
+
+    return False
+
+def class_is_instruction(cls):
+    """
+    Return True if cls_name is an instruction object
+    """
+    if cls.find("of_instruction") == 0:
+        return True
+
+    # For each vendor, check for vendor specific action
+    for exp in of_g.experimenter_name_to_id:
+        if cls.find("of_instruction_" + exp) == 0:
+            return True
+
+    return False
+
+def class_is_meter_band(cls):
+    """
+    Return True if cls_name is a meter_band object
+    """
+    # meter_band_stats is not a member of meter_band class hierarchy
+    if cls.find("of_meter_band_stats") == 0:
+        return False
+    if cls.find("of_meter_band") == 0:
+        return True
+    return False
+
+def class_is_hello_elem(cls):
+    """
+    Return True if cls_name is a hello_elem object
+    """
+    if cls.find("of_hello_elem") == 0:
+        return True
+    return False
+
+def class_is_queue_prop(cls):
+    """
+    Return True if cls_name is a queue_prop object
+    """
+    if cls.find("of_queue_prop") == 0:
+        return True
+
+    # For each vendor, check for vendor specific action
+    for exp in of_g.experimenter_name_to_id:
+        if cls.find("of_queue_prop_" + exp) == 0:
+            return True
+
+    return False
+
+def class_is_table_feature_prop(cls):
+    """
+    Return True if cls_name is a table_feature_prop object
+    """
+    if cls.find("of_table_feature_prop") == 0:
+        return True
+    return False
+
+def class_is_stats_message(cls):
+    """
+    Return True if cls_name is a stats message object based on info in unified
+    """
+
+    return "stats_type" in of_g.unified[cls]["union"]
+
+def class_is_list(cls):
+    """
+    Return True if cls_name is a list object
+    """
+    return (cls.find("of_list_") == 0)
+
+def type_is_of_object(m_type):
+    """
+    Return True if m_type is an OF object type
+    """
+    # Remove _t from the type id and see if key for unified class
+    if m_type[-2:] == "_t":
+        m_type = m_type[:-2]
+    return m_type in of_g.unified
+
+def list_to_entry_type(cls):
+    """
+    Return the entry type for a list
+    """
+    slen = len("of_list_")
+    return "of_" + cls[slen:] 
+
+def type_to_short_name(m_type):
+    if m_type in of_g.of_base_types:
+        tname = of_g.of_base_types[m_type]["short_name"]
+    elif m_type in of_g.of_mixed_types:
+        tname = of_g.of_mixed_types[m_type]["short_name"]
+    else:
+        tname = "unknown"
+    return tname
+
+def type_to_name_type(cls, member_name):
+    """
+    Generate the root name of a member for accessor functions, etc
+    @param cls The class name
+    @param member_name The member name
+    """
+    members = of_g.unified[cls]["union"]
+    if not member_name in members:
+        debug("Error:  %s is not in class %s for acc_name defn" %
+              (member_name, cls))
+        sys.exit(1)
+
+    mem = members[member_name]
+    m_type = mem["m_type"]
+    id = mem["memid"]
+    tname = type_to_short_name(m_type)
+
+    return "o%d_m%d_%s" % (of_g.unified[cls]["object_id"], id, tname)
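+
+# Illustrative sketch (ids assumed, not from the original source): a member
+# whose class has object_id 7, memid 2 and type uint32_t would produce the
+# root name "o7_m2_u32", which downstream code can use to build accessor names.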
+
+
+def member_to_index(m_name, members):
+    """
+    Given a member name, return its index in the members list
+    @param m_name The name of the data member to search for
+    @param members The list of member dicts
+    @return Index if found, -1 not found
+
+    Note we could generate an index when processing the original input
+    """
+    count = 0
+    for d in members:
+        if d["name"] == m_name:
+            return count
+        count += 1
+    return -1
+
+def member_base_type(cls, m_name):
+    """
+    Map a member to its of_ type
+    @param cls The class name
+    @param m_name The name of the member to look up
+    @return The of_ type of the member
+    """
+    rv = of_g.unified[cls]["union"][m_name]["m_type"]
+    if rv[-2:] == "_t":
+        return rv
+    return rv + "_t"
+
+def member_type_is_octets(cls, m_name):
+    return member_base_type(cls, m_name) == "of_octets_t"
+
+def member_returns_val(cls, m_name):
+    """
+    Whether the get accessor should return a value rather than void
+    @param cls The class name
+    @param m_name The member name
+    @return True if of_g config and the specific member allow a 
+    return value.  Otherwise False
+    """
+    m_type = of_g.unified[cls]["union"][m_name]["m_type"]
+    return (config_check("get_returns") =="value" and 
+            m_type in of_g.of_scalar_types)
+
+def config_check(str, dictionary = of_g.code_gen_config):
+    """
+    Return config value if in dictionary; else return False.
+    @param str The lookup index
+    @param dictionary The dict to check; defaults to of_g.code_gen_config
+    """
+
+    if str in dictionary:
+        return dictionary[str]
+
+    return False
+
+def h_file_to_define(name):
+    """
+    Convert a .h file name to the define used for the header
+    """
+    h_name = name[:-2].upper()
+    h_name = "_" + h_name + "_H_"
+    return h_name
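+
+# Illustrative example (file name assumed): h_file_to_define("of_object.h")
+# returns "_OF_OBJECT_H_", the include-guard macro for that header.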
+
+def type_to_cof_type(m_type):
+    if m_type in of_g.of_base_types:
+        if "cof_type" in of_g.of_base_types[m_type]:
+            return of_g.of_base_types[m_type]["cof_type"]
+    return m_type
+
+            
+def member_is_scalar(cls, m_name):
+    return of_g.unified[cls]["union"][m_name]["m_type"] in of_g.of_scalar_types
+
+def type_is_scalar(m_type):
+    return m_type in of_g.of_scalar_types
+
+def skip_member_name(name):
+    return name.find("pad") == 0 or name in of_g.skip_members
+
+def enum_name(cls):
+    """
+    Return the name used for an enum identifier for the given class
+    @param cls The class name
+    """
+    return cls.upper()
+
+def class_in_version(cls, ver):
+    """
+    Return boolean indicating if cls is defined for wire version ver
+    """
+
+    return (cls, ver) in of_g.base_length
+
+def instance_to_class(instance, parent):
+    """
+    Return the name of the class for an instance of inheritance type parent
+    """
+    return parent + "_" + instance
+
+def sub_class_to_var_name(cls):
+    """
+    Given a subclass name like of_action_output, generate the
+    name of a variable like 'output'
+    @param cls The class name
+    """
+    pass
+
+def class_is_var_len(cls, version):
+    # Match is special case.  Only version 1.2 (wire version 3) is var
+    if cls == "of_match":
+        return version == 3
+
+    return not (cls, version) in of_g.is_fixed_length
+
+def base_type_to_length(base_type, version):
+    if base_type + "_t" in of_g.of_base_types:
+        inst_len = of_g.of_base_types[base_type + "_t"]["bytes"]
+    else:
+        inst_len = of_g.base_length[(base_type, version)]
+    return inst_len
+
+def version_to_name(version):
+    """
+    Convert an integer version to the C macro name
+    """
+    return "OF_" + of_g.version_names[version]
+
+##
+# Is class a flow modify of some sort?
+
+def cls_is_flow_mod(cls):
+    return cls in ["of_flow_modify", "of_flow_add", "of_flow_delete",
+                   "of_flow_modify_strict", "of_flow_delete_strict"]
+
+
+def all_member_types_get(cls, version):
+    """
+    Get the members and list of types for members of a given class
+    @param cls The class name to process
+    @param version The version for the class
+    """
+    member_types = []
+
+    if not version in of_g.unified[cls]:
+        return ([], [])
+
+    if "use_version" in of_g.unified[cls][version]:
+        v = of_g.unified[cls][version]["use_version"]
+        members = of_g.unified[cls][v]["members"]
+    else:
+        members = of_g.unified[cls][version]["members"]
+    # Accumulate variables that are supported
+    for member in members:
+        m_type = member["m_type"]
+        m_name = member["name"]
+        if skip_member_name(m_name):
+            continue
+        if not m_type in member_types:
+            member_types.append(m_type)
+
+    return (members, member_types)
+
+def list_name_extract(list_type):
+    """
+    Return the base name for a list object of the given type
+    @param list_type The type of the list as appears in the input,
+    for example list(of_port_desc_t).
+    @return A pair, (list-name, base-type) where list-name is the
+    base name for the list, for example of_list_port_desc, and base-type
+    is the type of list elements like of_port_desc_t
+    """
+    base_type = list_type[5:-1]
+    list_name = base_type
+    if list_name.find("of_") == 0:
+        list_name = list_name[3:]
+    if list_name[-2:] == "_t":
+        list_name = list_name[:-2]
+    list_name = "of_list_" + list_name
+    return (list_name, base_type)
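+
+# Illustrative example (not from the original source):
+#   list_name_extract("list(of_port_desc_t)")
+#       -> ("of_list_port_desc", "of_port_desc_t")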
+
+def gen_c_copy_license(out):
+    """
+    Generate the top comments for copyright and license
+    """
+    out.write("""\
+/* Copyright (c) 2008 The Board of Trustees of The Leland Stanford Junior University */
+/* Copyright (c) 2011, 2012 Open Networking Foundation */
+/* Copyright (c) 2012, 2013 Big Switch Networks, Inc. */
+
+""")
+
+def accessor_returns_error(a_type, m_type):
+    is_var_len = (not type_is_scalar(m_type)) and \
+        [x for x in of_g.of_version_range if class_is_var_len(m_type[:-2], x)] != []
+    if a_type == "set" and is_var_len:
+        return True
+    elif m_type == "of_match_t":
+        return True
+    else:
+        return False
+
+def render_template(out, name, path, context):
+    """
+    Render a template using tenjin.
+    out: a file-like object
+    name: name of the template
+    path: array of directories to search for the template
+    context: dictionary of variables to pass to the template
+    """
+    pp = [ tenjin.PrefixedLinePreprocessor() ] # support "::" syntax
+    template_globals = { "to_str": str, "escape": str } # disable HTML escaping
+    engine = tenjin.Engine(path=path, pp=pp)
+    out.write(engine.render(name, context, template_globals))
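+
+# Usage sketch (template name and path are assumptions, not from the original
+# source): render_template(sys.stdout, "example.c", ["c_gen/templates"],
+# {"classes": []}) searches the listed directories for "example.c",
+# preprocesses "::"-prefixed lines as embedded Python, and writes the
+# rendered text to the given file-like object.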
+
+def render_static(out, name, path):
+    """
+    Write out a static template.
+    out: a file-like object
+    name: name of the template
+    path: array of directories to search for the template
+    """
+    # Reuse the tenjin logic for finding the template
+    template_filename = tenjin.FileSystemLoader().find(name, path)
+    if not template_filename:
+        raise ValueError("template %s not found" % name)
+    with open(template_filename) as infile:
+        out.write(infile.read())
diff --git a/loxi_utils/py_utils.py b/loxi_utils/py_utils.py
new file mode 100644
index 0000000..aaa4ab1
--- /dev/null
+++ b/loxi_utils/py_utils.py
@@ -0,0 +1,40 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+import types
+
+class DotDict(dict):
+    """ Access keys in a nested dictionary using dot notation """
+
+    def __getattr__(self, attr):
+        item = self.get(attr, None)
+        if type(item) == types.DictType:
+            item = DotDict(item)
+        return item
+
+    __setattr__= dict.__setitem__
+    __delattr__= dict.__delitem__
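+
+# Usage sketch (illustrative only):
+#   d = DotDict({"outer": {"inner": 1}})
+#   d.outer.inner   -> 1    (nested dicts are wrapped on access)
+#   d.missing       -> None (absent keys return None rather than raising)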
diff --git a/loxigen.py b/loxigen.py
new file mode 100755
index 0000000..d72e9e2
--- /dev/null
+++ b/loxigen.py
@@ -0,0 +1,600 @@
+#!/usr/bin/python
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+@brief
+Process openflow header files to create language specific LOXI interfaces
+
+First cut at simple python script for processing input files
+
+Internal notes
+
+An input file for each supported OpenFlow version is passed in
+on the command line.
+
+Expected input file format:
+
+These will probably be collapsed into a python dict or something
+
+The first line has the ofC version identifier and nothing else
+The second line has the openflow wire protocol value and nothing else
+
+The main content is struct elements for each OF recognized class.
+These are taken from current versions of openflow.h but are modified
+a bit.  See Overview for more information.
+
+Class canonical form:   A list of entries, each of which is a 
+pair "type, name;".  The exception is when type is the keyword
+'list' in which the syntax is "list(type) name;".
+
+From this, internal representations are generated:  For each
+version, a dict indexed by class name.  One element (members) is
+an array giving the member name and type.  From this, wire offsets
+can be calculated.
+
+
+@fixme Clean up the lang module architecture.  It should provide a
+list of files that it wants to generate and maps to the filenames,
+subdirectory names and generation functions.  It should also be 
+defined as a class, probably with the constructor taking the 
+language target.
+
+@fixme Clean up global data structures such as versions and of_g
+structures.  They should probably be a class or classes as well.
+
+"""
+
+import sys
+
+import re
+import string
+import os
+import glob
+import copy
+import of_g
+import loxi_front_end.oxm as oxm
+import loxi_front_end.type_maps as type_maps
+import loxi_utils.loxi_utils as loxi_utils
+import loxi_front_end.of_h_utils as of_h_utils
+import loxi_front_end.c_parse_utils as c_parse_utils
+import loxi_front_end.identifiers as identifiers
+import pyparsing
+import loxi_front_end.parser as parser
+
+from generic_utils import *
+
+root_dir = os.path.dirname(os.path.realpath(__file__))
+
+# TODO:  Put these in a class so they get documented
+
+## Dict indexed by version giving all info related to version
+#
+# This is local; after processing, the information is stored in
+# of_g variables.
+versions = {}
+
+def config_sanity_check():
+    """
+    Check the configuration for basic consistency
+
+    @fixme Needs update for generic language support
+    """
+
+    rv = True
+    # For now, only "error" supported for get returns
+    if config_check("copy_semantics") != "read":
+        debug("Only 'read' is supported for copy_semantics");
+        rv = False        
+    if config_check("get_returns") != "error":
+        debug("Only 'error' is supported for get-accessor return types\m");
+        rv = False        
+    if not config_check("use_fn_ptrs") and not config_check("gen_unified_fns"):
+        debug("Must have gen_fn_ptrs and/or gen_unified_fns set in config")
+        rv = False
+    if config_check("use_obj_id"):
+        debug("use_obj_id is set but not yet supported (change \
+config_sanity_check if it is)")
+        rv = False
+    if config_check("gen_unified_macros") and config_check("gen_unified_fns") \
+            and config_check("gen_unified_macro_lower"):
+        debug("Conflict: Cannot generate unified functions and lower case \
+unified macros")
+        rv = False
+        
+    return rv
+
+def add_class(wire_version, cls, members):
+    """
+    Process a class for the given version and update the unified 
+    list of classes as needed.
+
+    @param wire_version The wire version for this class defn
+    @param cls The name of the class being added
+    @param members The list of members with offsets calculated
+    """
+    memid = 0
+
+    sig = loxi_utils.class_signature(members)
+    if cls in of_g.unified:
+        uc = of_g.unified[cls]
+        if wire_version in uc:
+            debug("Error adding %s to unified. Wire ver %d exists" %
+                  (cls, wire_version))
+            sys.exit(1)
+        uc[wire_version] = {}
+        # Check for a matching signature
+        for wver in uc:
+            if type(wver) != type(0): continue
+            if wver == wire_version: continue
+            if not "use_version" in uc[wver]:
+                if sig == loxi_utils.class_signature(uc[wver]["members"]):
+                    log("Matched %s, ver %d to ver %d" % 
+                          (cls, wire_version, wver))
+                    # have a match with existing version
+                    uc[wire_version]["use_version"] = wver
+                    # What else to do?
+                    return
+    else:  # Haven't seen this entry before
+        log("Adding %s to unified list, ver %d" % (cls, wire_version))
+        of_g.unified[cls] = dict(union={})
+        uc = of_g.unified[cls]
+
+    # At this point, need to add members for this version
+    uc[wire_version] = dict(members = members)
+
+    # Per member processing:
+    #  Add to union list (I'm sure there's a better way)
+    #  Check if it's a list
+    union = uc["union"]
+    if not cls in of_g.ordered_members:
+        of_g.ordered_members[cls] = []
+    for member in members:
+        m_name = member["name"]
+        m_type = member["m_type"]
+        if m_name.find("pad") == 0:
+            continue
+        if m_name in union:
+            if not m_type == union[m_name]["m_type"]:
+                debug("ERROR:   CLASS: %s. VERSION %d. MEMBER: %s. TYPE: %s" %
+                      (cls, wire_version, m_name, m_type))
+                debug("    Type conflict adding member to unified set.")
+                debug("    Current union[%s]:" % m_name)
+                debug(union[m_name])
+                sys.exit(1)
+        else:
+            union[m_name] = dict(m_type=m_type, memid=memid)
+            memid += 1
+        if not m_name in of_g.ordered_members[cls]:
+            of_g.ordered_members[cls].append(m_name)
+
+def update_offset(cls, wire_version, name, offset, m_type):
+    """
+    Update (and return) the offset based on type.
+    @param cls The parent class
+    @param wire_version The wire version being processed
+    @param name The name of the data member
+    @param offset The current offset
+    @param m_type The type declaration being processed
+    @returns A pair (next_offset, len_update)  next_offset is the new offset
+    of the next object or -1 if this is a var-length object.  len_update
+    is the increment that should be added to the length.  Note that (for
+    of_match_v3) it is variable length, but it adds 8 bytes to the fixed
+    length of the object
+    If offset is already -1, do not update
+    Otherwise map to base type and count and update (if possible)
+    """
+    if offset < 0:    # Don't update offset once set to -1
+        return offset, 0
+
+    count, base_type = c_parse_utils.type_dec_to_count_base(m_type)
+
+    len_update = 0
+    if base_type in of_g.of_mixed_types:
+        base_type = of_g.of_mixed_types[base_type][wire_version]
+
+    base_class = base_type[:-2]
+    if (base_class, wire_version) in of_g.is_fixed_length:
+        bytes = of_g.base_length[(base_class, wire_version)]
+    else:
+        if base_type == "of_match_v3_t":
+            # This is a special case: it has non-zero min length
+            # but is variable length
+            bytes = -1
+            len_update = 8
+        elif base_type in of_g.of_base_types:
+            bytes = of_g.of_base_types[base_type]["bytes"]
+        else:
+            print "UNKNOWN TYPE for %s %s: %s" % (cls, name, base_type)
+            log("UNKNOWN TYPE for %s %s: %s" % (cls, name, base_type))
+            bytes = -1
+
+    # If the base type has a fixed size, the length grows by count * bytes
+    if bytes > 0:
+        len_update = count * bytes
+
+    if bytes == -1:
+        return -1, len_update
+
+    return offset + (count * bytes), len_update
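+
+# Worked example (illustrative, not from the original source): a member
+# declared "uint32_t[2] foo" at offset 8 yields count=2, bytes=4, so the
+# call returns (16, 8): the next member starts at offset 16 and the fixed
+# length grows by 8.  An of_match_v3_t member instead returns (-1, 8),
+# marking everything after it as variably placed.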
+
+def calculate_offsets_and_lengths(ordered_classes, classes, wire_version):
+    """
+    Generate the offsets for fixed offset class members
+    Also calculate the class_sizes when possible.
+
+    @param classes The classes to process
+    @param wire_version The wire version for this set of classes
+
+    Updates global variables
+    """
+
+    lists = set()
+
+    # Generate offsets
+    for cls in ordered_classes:
+        fixed_offset = 0 # The last "good" offset seen
+        offset = 0
+        last_offset = 0
+        last_name = "-"
+        for member in classes[cls]:
+            m_type = member["m_type"]
+            name = member["name"]
+            if last_offset == -1:
+                if name == "pad":
+                    log("Skipping pad for special offset for %s" % cls)
+                else:
+                    log("SPECIAL OFS: Member %s (prev %s), class %s ver %d" % 
+                          (name, last_name, cls, wire_version))
+                    if (((cls, name) in of_g.special_offsets) and
+                        (of_g.special_offsets[(cls, name)] != last_name)):
+                        debug("ERROR: special offset prev name changed")
+                        debug("  cls %s. name %s. version %d. was %s. now %s" %
+                              (cls, name, wire_version,
+                               of_g.special_offsets[(cls, name)], last_name))
+                        sys.exit(1)
+                    of_g.special_offsets[(cls, name)] = last_name
+
+            member["offset"] = offset
+            if m_type.find("list(") == 0:
+                (list_name, base_type) = loxi_utils.list_name_extract(m_type)
+                lists.add(list_name)
+                member["m_type"] = list_name + "_t"
+                offset = -1
+            elif m_type.find("struct") == 0:
+                debug("ERROR found struct: %s.%s " % (cls, name))
+                sys.exit(1)
+            elif m_type == "octets":
+                log("offset gen skipping octets: %s.%s " % (cls, name))
+                offset = -1
+            else:
+                offset, len_update = update_offset(cls, wire_version, name, 
+                                                  offset, m_type)
+                if offset != -1:
+                    fixed_offset = offset
+                else:
+                    fixed_offset += len_update
+                    log("offset is -1 for %s.%s version %d " % 
+                        (cls, name, wire_version))
+            last_offset = offset
+            last_name = name
+        of_g.base_length[(cls, wire_version)] = fixed_offset
+        if (offset != -1):
+            of_g.is_fixed_length.add((cls, wire_version))
+    for list_type in lists:
+        classes[list_type] = []
+        of_g.ordered_classes[wire_version].append(list_type)
+        of_g.base_length[(list_type, wire_version)] = 0
+
+def process_input_file(filename):
+    """
+    Process an input file
+
+    @param filename The input filename
+
+    @returns (wire_version, classes), where wire_version is the integer wire
+    protocol number and classes is the dict of all classes processed from the
+    file.
+
+    @todo Add support for parsing enums like we do structs
+    """
+
+    # Parse the input file
+    try:
+        ast = parser.parse(open(filename, 'r').read())
+    except pyparsing.ParseBaseException as e:
+        print "Parse error in %s: %s" % (os.path.basename(filename), str(e))
+        sys.exit(1)
+
+    ofinput = of_g.OFInput()
+
+    # Now for each structure, generate lists for each member
+    for s in ast:
+        if s[0] == 'struct':
+            name = s[1].replace("ofp_", "of_", 1)
+            members = [dict(m_type=x[0], name=x[1]) for x in s[2]]
+            ofinput.classes[name] = members
+            ofinput.ordered_classes.append(name)
+            if name in type_maps.inheritance_map:
+                # Clone class into header class and add to list
+                ofinput.classes[name + "_header"] = members[:]
+                ofinput.ordered_classes.append(name + "_header")
+        elif s[0] == 'metadata':
+            if s[1] == 'version':
+                log("Found version: wire version " + s[2])
+                if s[2] == 'any':
+                    ofinput.wire_versions.update(of_g.wire_ver_map.keys())
+                elif int(s[2]) in of_g.supported_wire_protos:
+                    ofinput.wire_versions.add(int(s[2]))
+                else:
+                    debug("Unrecognized wire protocol version")
+                    sys.exit(1)
+                found_wire_version = True
+
+    if not ofinput.wire_versions:
+        debug("Missing #version metadata")
+        sys.exit(1)
+
+    return ofinput
+
+def order_and_assign_object_ids():
+    """
+    Order all classes and assign object ids to all classes.
+
+    This is done to promote a reasonable order of the objects, putting
+    messages first followed by non-messages.  No assumptions should be
+    made about the order, nor about contiguous numbering.  However, the
+    numbers should all be reasonably small allowing arrays indexed by 
+    these enum values to be defined.
+    """
+
+    # Generate separate message and non-message ordered lists
+    for cls in of_g.unified:
+        if loxi_utils.class_is_message(cls):
+            of_g.ordered_messages.append(cls)
+        elif loxi_utils.class_is_list(cls):
+            of_g.ordered_list_objects.append(cls)
+        else:
+            of_g.ordered_non_messages.append(cls)
+
+    of_g.ordered_pseudo_objects.append("of_stats_request")
+    of_g.ordered_pseudo_objects.append("of_stats_reply")
+    of_g.ordered_pseudo_objects.append("of_flow_mod")
+
+    of_g.ordered_messages.sort()
+    of_g.ordered_pseudo_objects.sort()
+    of_g.ordered_non_messages.sort()
+    of_g.ordered_list_objects.sort()
+    of_g.standard_class_order.extend(of_g.ordered_messages)
+    of_g.standard_class_order.extend(of_g.ordered_non_messages)
+    of_g.standard_class_order.extend(of_g.ordered_list_objects)
+
+    # This includes pseudo classes for which most code is not generated
+    of_g.all_class_order.extend(of_g.ordered_messages)
+    of_g.all_class_order.extend(of_g.ordered_non_messages)
+    of_g.all_class_order.extend(of_g.ordered_list_objects)
+    of_g.all_class_order.extend(of_g.ordered_pseudo_objects)
+
+    # Assign object IDs
+    for cls in of_g.ordered_messages:
+        of_g.unified[cls]["object_id"] = of_g.object_id
+        of_g.object_id += 1
+    for cls in of_g.ordered_non_messages:
+        of_g.unified[cls]["object_id"] = of_g.object_id
+        of_g.object_id += 1
+    for cls in of_g.ordered_list_objects:
+        of_g.unified[cls]["object_id"] = of_g.object_id
+        of_g.object_id += 1
+    for cls in of_g.ordered_pseudo_objects:
+        of_g.unified[cls] = {}
+        of_g.unified[cls]["object_id"] = of_g.object_id
+        of_g.object_id += 1
+
+def process_canonical_file(version, filename):
+    """
+    Read in contents of openflow.h file filename and process it
+    @param version The wire version number
+    @param filename The name of the openflow header file for this version
+
+    Updates of_g.identifiers dictionary.  See of_g.py
+    """
+    log("Processing canonical file %s, version %d" % (filename, version))
+    f = open(filename, 'r')
+    all_lines = f.readlines()
+    contents = " ".join(all_lines)
+    identifiers.add_identifiers(of_g.identifiers, of_g.identifiers_by_group,
+                                version, contents)
+
+
+def initialize_versions():
+    """
+    Create an empty datastructure for each target version.
+    """
+
+    for wire_version in of_g.target_version_list:
+        version_name = of_g.of_version_wire2name[wire_version]
+        of_g.wire_ver_map[wire_version] = version_name
+        versions[version_name] = dict(
+            version_name = version_name,
+            wire_version = wire_version,
+            classes = {})
+        of_g.ordered_classes[wire_version] = []
+
+
+def read_input():
+    """
+    Read in from files given on command line and update global state
+
+    @fixme Should select versions to support from command line
+    """
+
+    filenames = sorted(glob.glob("%s/openflow_input/*" % root_dir))
+
+    for filename in filenames:
+        log("Processing struct file: " + filename)
+        ofinput = process_input_file(filename)
+
+        # Populate global state
+        for wire_version in ofinput.wire_versions:
+            version_name = of_g.of_version_wire2name[wire_version]
+            versions[version_name]['classes'].update(copy.deepcopy(ofinput.classes))
+            of_g.ordered_classes[wire_version].extend(ofinput.ordered_classes)
+
+def add_extra_classes():
+    """
+    Add classes that are generated by Python code instead of from the
+    input files.
+    """
+
+    for wire_version in [of_g.VERSION_1_2, of_g.VERSION_1_3]:
+        version_name = of_g.of_version_wire2name[wire_version]
+        oxm.add_oxm_classes_1_2(versions[version_name]['classes'], wire_version)
+
+def analyze_input():
+    """
+    Add information computed from the input, including offsets and
+    lengths of struct members and the set of list and action_id types.
+    """
+
+    # Generate action_id classes for OF 1.3
+    for wire_version, ordered_classes in of_g.ordered_classes.items():
+        if not wire_version in [of_g.VERSION_1_3]:
+            continue
+        classes = versions[of_g.of_version_wire2name[wire_version]]['classes']
+        for cls in ordered_classes:
+            if not loxi_utils.class_is_action(cls):
+                continue
+            action = cls[10:]
+            if action == '' or action == 'header':
+                continue
+            name = "of_action_id_" + action
+            members = classes["of_action"][:]
+            of_g.ordered_classes[wire_version].append(name)
+            if type_maps.action_id_is_extension(name, wire_version):
+                # Copy the base action classes thru subtype
+                members = classes["of_action_" + action][:4]
+            classes[name] = members
+
+    # @fixme If we support extended actions in OF 1.3, need to add IDs
+    # for them here
+
+    for wire_version in of_g.wire_ver_map.keys():
+        version_name = of_g.of_version_wire2name[wire_version]
+        calculate_offsets_and_lengths(
+            of_g.ordered_classes[wire_version],
+            versions[version_name]['classes'],
+            wire_version)
+
+def unify_input():
+    """
+    Create Unified View of Objects
+    """
+
+    global versions
+
+    # Add classes to unified in wire-format order so that it is easier 
+    # to generate things later
+    keys = versions.keys()
+    keys.sort(reverse=True)
+    for version in keys:
+        wire_version = versions[version]["wire_version"]
+        classes = versions[version]["classes"]
+        for cls in of_g.ordered_classes[wire_version]:
+            add_class(wire_version, cls, classes[cls])
+
+
+def log_all_class_info():
+    """
+    Log the results of processing the input
+
+    Debug function
+    """
+
+    for cls in of_g.unified:
+        for v in of_g.unified[cls]:
+            if type(v) == type(0):
+                log("cls: %s. ver: %d. base len %d. %s" %
+                    (str(cls), v, of_g.base_length[(cls, v)],
+                     loxi_utils.class_is_var_len(cls,v) and "not fixed"
+                     or "fixed"))
+                if "use_version" in of_g.unified[cls][v]:
+                    log("cls %s: v %d mapped to %d" % (str(cls), v, 
+                           of_g.unified[cls][v]["use_version"]))
+                if "members" in of_g.unified[cls][v]:
+                    for member in of_g.unified[cls][v]["members"]:
+                        log("   %-20s: type %-20s. offset %3d" %
+                            (member["name"], member["m_type"],
+                             member["offset"]))
+
+def generate_all_files():
+    """
+    Create the files for the language target
+    """
+    for (name, fn) in lang_module.targets.items():
+        path = of_g.options.install_dir + '/' + name
+        os.system("mkdir -p %s" % os.path.dirname(path))
+        with open(path, "w") as outfile:
+            fn(outfile, os.path.basename(name))
+        print("Wrote contents for " + name)
+
+if __name__ == '__main__':
+    of_g.loxigen_log_file = open("loxigen.log", "w")
+    of_g.loxigen_dbg_file = sys.stdout
+
+    of_g.process_commandline()
+    # @fixme Use command line params to select log
+
+    if not config_sanity_check():
+        debug("Config sanity check failed\n")
+        sys.exit(1)
+
+    # Import the language file
+    lang_file = "lang_%s" % of_g.options.lang
+    lang_module = __import__(lang_file)
+
+    # If list files, just list auto-gen files to stdout and exit
+    if of_g.options.list_files:
+        for name in lang_module.targets:
+            print of_g.options.install_dir + '/' + name
+        sys.exit(0)
+
+    log("\nGenerating files for target language %s\n" % of_g.options.lang)
+
+    # Generate identifier code
+    for version in of_g.target_version_list:
+        version_tag = of_g.param_version_names[version]
+        filename = "%s/canonical/openflow.h-%s" % (root_dir, version_tag)
+        process_canonical_file(version, filename)
+
+    initialize_versions()
+    read_input()
+    add_extra_classes()
+    analyze_input()
+    unify_input()
+    order_and_assign_object_ids()
+    log_all_class_info()
+    generate_all_files()
diff --git a/of_g.py b/of_g.py
new file mode 100644
index 0000000..87da1c9
--- /dev/null
+++ b/of_g.py
@@ -0,0 +1,524 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+##
+# @brief Global data structs for LOXI code generation
+#
+# @fixme This needs to be refactored and brought into the 21st century.
+#
+
+import sys
+from optparse import OptionParser
+# @fixme Replace with argparse
+
+################################################################
+#
+# Configuration global parameters
+#
+################################################################
+
+##
+# The map from wire protocol to enum identifier generated from input
+# This is built from the version-specific structs file info.
+# @fixme This should go away when the process structs file is updated
+wire_ver_map = {}
+
+##
+# Command line options
+options = {}
+
+##
+# Command line arguments
+args = []
+
+##@var options_default
+# The default configuration dictionary for LOXI code generation
+options_default = {
+    "lang"               : "c",
+    "version-list"       : "1.0 1.1 1.2 1.3",
+    "install-dir"        : "loxi_output",
+}
+
+##
+# The list of wire versions which are to be supported
+target_version_list = []
+
+def lang_normalize(lang):
+    """
+    Normalize the representation of the language 
+    """
+    return lang.lower()
+
+def version_list_normalize(vlist):
+    """
+    Normalize the version list and return as an array
+    """
+    out_list = []
+    # @fixme Map to OF version references
+    if vlist.find(',') > 0:
+        vlist = vlist.split(',')
+    else:
+        vlist = vlist.split()
+    vlist.sort()
+    for ver in vlist:
+        try:
+            out_list.append(of_param_version_map[ver])
+        except KeyError:
+            sys.stderr.write("Bad version input, %s" % str(ver))
+            sys.exit(1)
+
+    return out_list
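+
+# Illustrative examples (not from the original source): both "1.0,1.3" and
+# "1.0 1.3" normalize to [VERSION_1_0, VERSION_1_3], i.e. wire versions [1, 4].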
+
+def process_commandline(default_vals=options_default):
+    """
+    Set up the options dictionary
+
+    @param cfg_dflt The default configuration dictionary
+    @return A pair (options, args) as per parser return
+    """
+    global options
+    global args
+    global target_version_list
+
+    parser = OptionParser(version="%prog 0.1")
+
+    #@todo Add options via dictionary
+    parser.add_option("--list-files", action="store_true", default=False,
+                      help="List output files generated")
+    parser.add_option("-l", "--lang", "--language",
+                      default=default_vals["lang"],
+                      help="Select the target language: c, python")
+    parser.add_option("-i", "--install-dir",
+                      default=default_vals["install-dir"],
+                      help="Directory to install generated files to (default %s)" % default_vals["install-dir"])
+    parser.add_option("-v", "--version-list", 
+                      default=default_vals["version-list"],
+                      help="Specify the versions to target as 1.0 1.1 etc")
+
+    (options, args) = parser.parse_args()
+
+    options.lang = lang_normalize(options.lang)
+    target_version_list = version_list_normalize(options.version_list)
+    target_version_list.sort()
+    return (options, args)
+
+##
+# The dictionary of config variables related to code
+#
+# @param gen_unified_fns  Boolean; Generate top level function definitions for
+# accessors which are independent of the version; the alternative is to only 
+# use the function pointers in the class definitions.  These functions support
+# better inlining optimizations.
+#
+# @param gen_fn_ptrs Boolean; Generate the functions pointed to by pointer
+# in the class (struct) definitions; the alternative is to only use the
+# unified (use_) functions
+#
+# @param use_obj_id  Use object IDs in struct defns   CURRENTLY NOT SUPPORTED
+# 
+# @param return_base_types For 'get' accessors, return values when possible.
+# Otherwise all values are returned thru a call by variable parameter
+#
+# @param use_static_inlines Generate low level accessors as static inline
+# and put them in header files rather than .c files.
+#
+# @param copy_semantics One of "read", "write" or "grow".  This defines the
+# way that buffer references are managed.  Currently only "read" is supported.
+#
+# @param encoded_typedefs Use object and member IDs (rather than names)
+# when generating the names used for accessor function typedefs
+#
+# @param get_returns One of "error", "value", or "void"; 
+# CURRENTLY ONLY "error" IS SUPPORTED.  "error" means
+# all get operations return an error code.  "value" means return a base_type
+# value when possible or void if not.  "void" means always return void
+# and use a call-by-variable parameter
+#
+
+# @fixme These are still very C specific and should probably either
+# go into lang_c.py or be swallowed by command line option parsing
+code_gen_config = dict(
+    gen_unified_fns=True,
+#    gen_fn_ptrs=True,  # WARNING: Haven't tested with this in a while
+    gen_fn_ptrs=False,
+    use_obj_id=False,
+    use_static_inlines=False,
+    copy_semantics="read",  # Only read implemented: read, write, grow
+    encoded_typedefs=False,
+    get_returns="error",   # Only error implemented; error, value, void
+)
+
+## These members do not get normal accessors
+
+skip_members = ["version", "type", "length", "stats_type", "len",
+                "type_len", "actions_len", "_command"]
+
+## Some OpenFlow string length constants
+#
+# These are a few length constants needed for array processing
+ofp_constants = dict(
+    OF_MAX_TABLE_NAME_LEN = 32,
+    OF_MAX_PORT_NAME_LEN  = 16,
+    OF_ETH_ALEN = 6,
+    OF_DESC_STR_LEN   = 256,
+    OF_SERIAL_NUM_LEN = 32
+)
+
+## List of mixed data types
+#
+# This is a list of data types which require special treatment
+# because the underlying datatype has changed between versions.
+# The main example is port which went from 16 to 32 bits.  We
+# define per-version accessors for these types and those are
+# used in place of the normal ones.
+#
+# The wire protocol number is used to identify versions.  For now,
+# the value is the name of the type to use for that version
+#
+# This is the map between the external type (like of_port_no_t)
+# which is used by customers of this code and the internal 
+# datatypes (like uint16_t) that appear on the wire for a 
+# particular version.
+#
+of_mixed_types = dict(
+    of_port_no_t = {
+        1: "uint16_t",
+        2: "uint32_t",
+        3: "uint32_t",
+        4: "uint32_t",
+        "short_name":"port_no"
+        },
+    of_port_desc_t = {
+        1: "of_port_desc_t",
+        2: "of_port_desc_t",
+        3: "of_port_desc_t",
+        4: "of_port_desc_t",
+        "short_name":"port_desc"
+        },
+    of_fm_cmd_t = { # Flow mod command went from u16 to u8
+        1: "uint16_t",
+        2: "uint8_t",
+        3: "uint8_t",
+        4: "uint8_t",
+        "short_name":"fm_cmd"
+        },
+    of_wc_bmap_t = { # Wildcard bitmap
+        1: "uint32_t",
+        2: "uint32_t",
+        3: "uint64_t",
+        4: "uint64_t",
+        "short_name":"wc_bmap"
+        },
+    of_match_bmap_t = { # Match bitmap
+        1: "uint32_t",
+        2: "uint32_t",
+        3: "uint64_t",
+        4: "uint64_t",
+        "short_name":"match_bmap"
+        },
+    of_match_t = { # Match object
+        1: "of_match_v1_t",
+        2: "of_match_v2_t",
+        3: "of_match_v3_t",
+        4: "of_match_v3_t",  # Currently uses same match as 1.2 (v3).
+        "short_name":"match"
+        },
+)
+
+## Base data types
+#
+# The basic types; Value is a list: bytes, to_wire, from_wire
+# The accessors deal with endian, alignment and any other host/network
+# considerations.  These are common across all versions
+#
+# For get accessors, assume we memcpy from wire buf and then apply ntoh
+# For set accessors, assume we apply hton and then memcpy to wire buf
+#
+# to/from wire functions take a pointer to class and change in place
+of_base_types = dict(
+    char = dict(bytes=1, use_as_rv=1, short_name="char"),
+    uint8_t = dict(bytes=1, use_as_rv=1, short_name="u8"),
+    uint16_t = dict(bytes=2, to_w="u16_hton", from_w="u16_ntoh", use_as_rv=1,
+                    short_name="u16"),
+    uint32_t = dict(bytes=4, to_w="u32_hton", from_w="u32_ntoh", use_as_rv=1,
+                    short_name="u32"),
+    uint64_t = dict(bytes=8, to_w="u64_hton", from_w="u64_ntoh", use_as_rv=1,
+                    short_name="u64"),
+#    of_cookie_t = dict(bytes=8, to_w="u64_hton", from_w="u64_ntoh", use_as_rv=1#,
+#                    short_name="cookie"),
+#    of_counter_t = dict(bytes=8, to_w="u64_hton", from_w="u64_ntoh", use_as_rv=1,
+#                    short_name="counter"),
+    of_mac_addr_t = dict(bytes=6, short_name="mac"),
+    of_ipv6_t = dict(bytes=16, short_name="ipv6"),
+    of_port_name_t = dict(bytes=ofp_constants["OF_MAX_PORT_NAME_LEN"],
+                          short_name="port_name"),
+    of_table_name_t = dict(bytes=ofp_constants["OF_MAX_TABLE_NAME_LEN"],
+                           short_name="tab_name"),
+    of_desc_str_t = dict(bytes=ofp_constants["OF_DESC_STR_LEN"],
+                         short_name="desc_str"),
+    of_serial_num_t = dict(bytes=ofp_constants["OF_SERIAL_NUM_LEN"],
+                           short_name="ser_num"),
+    of_match_v1_t = dict(bytes=40, to_w="match_v1_hton", 
+                         from_w="match_v1_ntoh", 
+                         short_name="match_v1"),
+    of_match_v2_t = dict(bytes=88, to_w="match_v2_hton", 
+                         from_w="match_v2_ntoh", 
+                         short_name="match_v2"),
+    of_match_v3_t = dict(bytes=-1, to_w="match_v3_hton", 
+                         from_w="match_v3_ntoh", 
+                         short_name="match_v3"),
+#    of_match_v4_t = dict(bytes=-1, to_w="match_v4_hton", 
+#                         from_w="match_v4_ntoh", 
+#                         short_name="match_v4"),
+    of_octets_t = dict(bytes=-1, short_name="octets")
+)
+
+of_scalar_types = ["char", "uint8_t", "uint16_t", "uint32_t", "uint64_t",
+                   "of_port_no_t", "of_fm_cmd_t", "of_wc_bmap_t",
+                   "of_match_bmap_t", "of_port_name_t", "of_table_name_t",
+                   "of_desc_str_t", "of_serial_num_t", "of_mac_addr_t", 
+                   "of_ipv6_t"]
+
+base_object_members = """\
+    /* The control block for the underlying data buffer */
+    of_wire_object_t wire_object;
+    /* The LOCI type enum value of the object */
+    of_object_id_t object_id;
+
+    /*
+     * Objects need to track their "parent" so that updates to the
+     * object that affect its length can be pushed to the parent.
+     * Treat as private.
+     */
+    of_object_t *parent;
+
+    /*
+     * Not all objects have length and version on the wire so we keep
+     * them here.  NOTE: Infrastructure manages length and version.
+     * Treat length as private and version as read only.
+     */
+    int length;
+    of_version_t version;
+
+    /*
+     * Many objects have a length and/or type represented in the wire buffer
+     * These accessors get and set those value when present.  Treat as private.
+     */
+    of_wire_length_get_f wire_length_get;
+    of_wire_length_set_f wire_length_set;
+    of_wire_type_get_f wire_type_get;
+    of_wire_type_set_f wire_type_set;
+
+    of_object_track_info_t track_info;
+
+    /*
+     * Metadata available for applications.  Ensure 8-byte alignment, but
+     * that buffer is at least as large as requested.  This data is not used
+     * or inspected by LOCI.
+     */
+    uint64_t metadata[(OF_OBJECT_METADATA_BYTES + 7) / 8];
+"""
+
+
+##
+# LOXI identifiers
+#
+# Dict indexed by identifier name.  Each entry contains the information
+# as a DotDict with the following keys:
+# values: A dict indexed by wire version giving each version's value or None
+# common: The common value to use for this identifier at the LOXI top level (TBD)
+# all_same: If True, all the values across all versions are the same
+# ofp_name: The original name for the identifier
+# ofp_group: The ofp enumerated type if defined
+
+identifiers = {}
+
+##
+# Identifiers by original group
+# Keys are the original group names.  Value is a list of LOXI identifiers
+
+identifiers_by_group = {}
+
+## Ordered list of class names
+# This is per-wire-version and is a list of the classes in the order
+# they appear in the file.  That is important because of the assumption
+# that data members are defined before they are included in a superclass.
+ordered_classes = {} # Indexed by wire version
+
+## Per class ordered list of member names
+ordered_members = {}
+
+## Ordered list of message classes
+ordered_messages = []
+
+## Ordered list of non message classes
+ordered_non_messages = []
+
+## The objects that need list support
+ordered_list_objects = []
+
+## Stats request/reply are pseudo objects
+ordered_pseudo_objects = []
+
+## Standard order is normally messages followed by non-messages
+standard_class_order = []
+
+## All classes in order, including pseudo classes for which most code
+# is not generated.
+all_class_order = []
+
+## Map from class, wire_version to size of fixed part of class
+base_length = {}
+
+## Set of (class, wire_version) pairs that have a fixed length
+is_fixed_length = set()
+
+## The global object ID counter
+object_id = 1  # Reserve 0 for root object
+
+## The unified view of all classes.  See internal readme.
+unified = {}
+
+## Indicates data members with non-fixed start offsets
+# Indexed by (cls, version, member-name) and value is prev-member-name
+special_offsets = {}
+
+## Define Python variables with integer wire version values
+VERSION_1_0 = 1
+VERSION_1_1 = 2
+VERSION_1_2 = 3
+VERSION_1_3 = 4
+
+# Ignore version for some functions
+VERSION_ANY = -1
+
+## @var supported_wire_protos
+# The wire protocols this version of LoxiGen supports
+supported_wire_protos = set([1, 2, 3, 4])
+version_names = {1:"VERSION_1_0", 2:"VERSION_1_1", 3:"VERSION_1_2",
+                 4:"VERSION_1_3"}
+short_version_names = {1:"OF_1_0", 2:"OF_1_1", 3:"OF_1_2", 4:"OF_1_3"}
+param_version_names = {1:"1.0", 2:"1.1", 3:"1.2", 4:"1.3"}
+
+##
+# Maps and ranges related to versioning
+
+# For parameter version indications
+of_param_version_map = {
+    "1.0":VERSION_1_0,
+    "1.1":VERSION_1_1,
+    "1.2":VERSION_1_2,
+    "1.3":VERSION_1_3
+    }
+
+# For parameter version indications
+of_version_map = {
+    "1.0":VERSION_1_0,
+    "1.1":VERSION_1_1,
+    "1.2":VERSION_1_2,
+    "1.3":VERSION_1_3
+    }
+
+# The iteration object that gives the wire versions supported
+of_version_range = [VERSION_1_0, VERSION_1_1, VERSION_1_2, VERSION_1_3]
+of_version_max = VERSION_1_3
+
+
+of_version_name2wire = dict(
+    OF_VERSION_1_0=VERSION_1_0,
+    OF_VERSION_1_1=VERSION_1_1,
+    OF_VERSION_1_2=VERSION_1_2,
+    OF_VERSION_1_3=VERSION_1_3
+    )
+
+of_version_wire2name = {
+    VERSION_1_0:"OF_VERSION_1_0",
+    VERSION_1_1:"OF_VERSION_1_1",
+    VERSION_1_2:"OF_VERSION_1_2",
+    VERSION_1_3:"OF_VERSION_1_3"
+    }
+
+
+################################################################
+#
+# Experimenters, vendors, extensions
+#
+# Although the term "experimenter" is used for identifying 
+# external extension definitions, we generally use the term
+# extension when referring to the messages or objects themselves.
+#
+# Conventions:
+#
+# Extension messages should start with of_<experimenter>_
+# Extension actions should start with of_<experimenter>_action_
+# Extension instructions should start with of_<experimenter>_instructions_
+#
+# Currently, the above conventions are not enforced; the mapping
+# is done brute force in type_maps.py
+#
+################################################################
+
+# The map of known experimenters to their experimenter IDs
+experimenter_name_to_id = dict(
+    bsn = 0x005c16c7,
+    nicira = 0x00002320,
+    openflow = 0x000026e1
+    )
+
+def experimenter_name_lookup(experimenter_id):
+    """
+    Map an experimenter ID to its LOXI recognized name string
+    """
+    for name, id in experimenter_name_to_id.items():
+        if id == experimenter_id:
+            return name
+    return None
+
+################################################################
+#
+# Debug
+#
+################################################################
+
+loxigen_dbg_file = sys.stdout
+loxigen_log_file = sys.stdout
+
+################################################################
+#
+# Internal representation
+#
+################################################################
+
+class OFInput(object):
+    """
+    A single LOXI input file.
+    """
+
+    def __init__(self):
+        self.wire_versions = set()
+        self.classes = {}
+        self.ordered_classes = []
diff --git a/openflow_input/bsn_get_interfaces b/openflow_input/bsn_get_interfaces
new file mode 100644
index 0000000..90060ee
--- /dev/null
+++ b/openflow_input/bsn_get_interfaces
@@ -0,0 +1,55 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version any
+
+struct ofp_bsn_get_interfaces_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;    // OF_EXPERIMENTER_ID_BSN
+    uint32_t subtype;   // BSN_GET_INTERFACES_REQUEST
+};
+
+struct ofp_bsn_interface {
+    of_mac_addr_t hw_addr;
+    uint16_t pad;
+    of_port_name_t name;
+    uint32_t ipv4_addr;
+    uint32_t ipv4_netmask;
+};
+
+struct ofp_bsn_get_interfaces_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;    // OF_EXPERIMENTER_ID_BSN
+    uint32_t subtype;   // BSN_GET_INTERFACES_REPLY
+    list(of_bsn_interface_t) interfaces;
+};
diff --git a/openflow_input/bsn_ip_mask b/openflow_input/bsn_ip_mask
new file mode 100644
index 0000000..03f233b
--- /dev/null
+++ b/openflow_input/bsn_ip_mask
@@ -0,0 +1,63 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version 1
+
+struct ofp_bsn_set_ip_mask {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;  // bsn 0x005c16c7,
+    uint32_t subtype;       // 0
+    uint8_t index;
+    uint8_t[3] pad;
+    uint32_t mask;
+};
+
+struct ofp_bsn_get_ip_mask_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;  // bsn 0x005c16c7,
+    uint32_t subtype;       // 1
+    uint8_t index;
+    uint8_t[7] pad;
+};
+
+struct ofp_bsn_get_ip_mask_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;  // bsn 0x005c16c7,
+    uint32_t subtype;       // 2
+    uint8_t index;
+    uint8_t[3] pad;
+    uint32_t mask;
+};
diff --git a/openflow_input/bsn_mirror b/openflow_input/bsn_mirror
new file mode 100644
index 0000000..c873595
--- /dev/null
+++ b/openflow_input/bsn_mirror
@@ -0,0 +1,74 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version any
+
+// BSN mirror action
+struct ofp_action_bsn_mirror {
+    uint16_t type;      // OF_ACTION_TYPE_EXPERIMENTER
+    uint16_t len;
+    uint32_t experimenter;    // OF_EXPERIMENTER_ID_BSN 
+    uint32_t subtype;   // ACTION_BSN_MIRROR 
+    uint32_t dest_port; // mirror destination port
+    uint32_t vlan_tag;  // VLAN tag for mirrored packet (TPID+TCI) (0 == none)
+    uint8_t copy_stage; // 0 == ingress, 1 == egress 
+    uint8_t[3] pad;
+};
+
+// BSN mirroring messages
+struct ofp_bsn_set_mirroring {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;    // OF_EXPERIMENTER_ID_BSN
+    uint32_t subtype;   // BSN_MIRRORING_SET
+    uint8_t report_mirror_ports;
+    uint8_t[3] pad;
+};
+
+struct ofp_bsn_get_mirroring_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;    // OF_EXPERIMENTER_ID_BSN
+    uint32_t subtype;   // BSN_MIRRORING_GET_REQUEST
+    uint8_t report_mirror_ports;
+    uint8_t[3] pad;
+};
+
+struct ofp_bsn_get_mirroring_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;    // OF_EXPERIMENTER_ID_BSN
+    uint32_t subtype;   // BSN_MIRRORING_GET_REPLY
+    uint8_t report_mirror_ports;
+    uint8_t[3] pad;
+};
diff --git a/openflow_input/bsn_set_tunnel_dst b/openflow_input/bsn_set_tunnel_dst
new file mode 100644
index 0000000..6b16d96
--- /dev/null
+++ b/openflow_input/bsn_set_tunnel_dst
@@ -0,0 +1,37 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version any
+
+// BSN set tunnel destination IP action
+struct ofp_action_bsn_set_tunnel_dst {
+    uint16_t type;      // OF_ACTION_TYPE_EXPERIMENTER
+    uint16_t len;
+    uint32_t experimenter;    // OF_EXPERIMENTER_ID_BSN 
+    uint32_t subtype;   // ACTION_BSN_SET_TUNNEL_DST
+    uint32_t dst; // tunnel destination IP
+};
diff --git a/openflow_input/bsn_shell b/openflow_input/bsn_shell
new file mode 100644
index 0000000..ea85571
--- /dev/null
+++ b/openflow_input/bsn_shell
@@ -0,0 +1,59 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version 1
+
+struct ofp_bsn_shell_command {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;    // OF_EXPERIMENTER_ID_BSN
+    uint32_t subtype;   // BSN_SHELL_COMMAND
+    uint32_t service;
+    of_octets_t data;
+};
+
+struct ofp_bsn_shell_output {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;    // OF_EXPERIMENTER_ID_BSN
+    uint32_t subtype;   // BSN_SHELL_OUTPUT
+    of_octets_t data;
+};
+
+struct ofp_bsn_shell_status {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;    // OF_EXPERIMENTER_ID_BSN
+    uint32_t subtype;   // BSN_SHELL_STATUS
+    uint32_t status;
+};
diff --git a/openflow_input/bsn_table_mod b/openflow_input/bsn_table_mod
new file mode 100644
index 0000000..418c716
--- /dev/null
+++ b/openflow_input/bsn_table_mod
@@ -0,0 +1,40 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version 1
+
+// This is the 1.1+ table mod message, which we backport to 1.0
+// for use inside components
+struct ofp_table_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint8_t table_id;
+    uint8_t[3] pad;
+    uint32_t config;
+};
diff --git a/openflow_input/nicira_dec_ttl b/openflow_input/nicira_dec_ttl
new file mode 100644
index 0000000..b507d57
--- /dev/null
+++ b/openflow_input/nicira_dec_ttl
@@ -0,0 +1,37 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version any
+
+struct ofp_action_nicira_dec_ttl {
+    uint16_t type;      // OF_ACTION_TYPE_EXPERIMENTER
+    uint16_t len;
+    uint32_t experimenter;    // OF_EXPERIMENTER_ID_NICIRA
+    uint16_t subtype;         // 18
+    uint16_t pad;
+    uint32_t pad2;
+};
diff --git a/openflow_input/nicira_role b/openflow_input/nicira_role
new file mode 100644
index 0000000..06d7c7f
--- /dev/null
+++ b/openflow_input/nicira_role
@@ -0,0 +1,48 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version 1
+
+struct ofp_nicira_controller_role_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter; // OF_EXPERIMENTER_ID_NICIRA 0x00002320
+    uint32_t subtype;      // 10
+    uint32_t role;         // 0 other, 1 master, 2 slave
+};
+
+struct ofp_nicira_controller_role_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter; // OF_EXPERIMENTER_ID_NICIRA 0x00002320
+    uint32_t subtype;      // 11
+    uint32_t role;         // 0 other, 1 master, 2 slave
+};
diff --git a/openflow_input/standard-1.0 b/openflow_input/standard-1.0
new file mode 100644
index 0000000..943d840
--- /dev/null
+++ b/openflow_input/standard-1.0
@@ -0,0 +1,669 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version 1
+
+struct ofp_header {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_hello {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_echo_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_octets_t data;
+};
+
+struct ofp_echo_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_octets_t data;
+};
+
+struct ofp_experimenter {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;
+    uint32_t subtype;
+    of_octets_t data;
+};
+
+struct ofp_barrier_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_barrier_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_get_config_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_get_config_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t flags;
+    uint16_t miss_send_len;
+};
+
+struct ofp_set_config {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t flags;
+    uint16_t miss_send_len;
+};
+
+struct ofp_port_desc {
+    of_port_no_t port_no;
+    of_mac_addr_t hw_addr;
+    of_port_name_t name;
+    uint32_t config;
+    uint32_t state;
+    uint32_t curr;
+    uint32_t advertised;
+    uint32_t supported;
+    uint32_t peer;
+};
+
+struct ofp_features_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_features_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t datapath_id;
+    uint32_t n_buffers;
+    uint8_t n_tables;
+    uint8_t[3] pad;
+    uint32_t capabilities;
+    uint32_t actions;
+    list(of_port_desc_t) ports;
+};
+
+struct ofp_port_status {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint8_t reason;
+    uint8_t[7] pad;
+    of_port_desc_t desc;
+};
+
+struct ofp_port_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port_no;
+    of_mac_addr_t hw_addr;
+    uint32_t config;
+    uint32_t mask;
+    uint32_t advertise;
+    uint8_t[4] pad;
+};
+
+struct ofp_packet_in {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t buffer_id;
+    uint16_t total_len;
+    of_port_no_t in_port;
+    uint8_t reason;
+    uint8_t pad;
+    of_octets_t data;
+};
+
+struct ofp_action_output {
+    uint16_t type;
+    uint16_t len;
+    of_port_no_t port;
+    uint16_t max_len;
+};
+
+struct ofp_action_set_vlan_vid {
+    uint16_t type;
+    uint16_t len;
+    uint16_t vlan_vid;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_strip_vlan {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_set_vlan_pcp {
+    uint16_t type;
+    uint16_t len;
+    uint8_t vlan_pcp;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_set_dl_src {
+    uint16_t type;
+    uint16_t len;
+    of_mac_addr_t dl_addr;
+    uint8_t[6] pad;
+};
+
+struct ofp_action_set_dl_dst {
+    uint16_t type;
+    uint16_t len;
+    of_mac_addr_t dl_addr;
+    uint8_t[6] pad;
+};
+
+struct ofp_action_set_nw_src {
+    uint16_t type;
+    uint16_t len;
+    uint32_t nw_addr;
+};
+
+struct ofp_action_set_nw_dst {
+    uint16_t type;
+    uint16_t len;
+    uint32_t nw_addr;
+};
+
+struct ofp_action_set_tp_src {
+    uint16_t type;
+    uint16_t len;
+    uint16_t tp_port;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_set_tp_dst {
+    uint16_t type;
+    uint16_t len;
+    uint16_t tp_port;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_set_nw_tos {
+    uint16_t type;
+    uint16_t len;
+    uint8_t nw_tos;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_experimenter {
+    uint16_t type;
+    uint16_t len;
+    uint32_t experimenter;
+    of_octets_t data;
+};
+
+struct ofp_action_enqueue {
+    uint16_t type;
+    uint16_t len;
+    of_port_no_t port;
+    uint8_t[6] pad;
+    uint32_t queue_id;
+};
+
+struct ofp_action {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_packet_out {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t buffer_id;
+    of_port_no_t in_port;
+    uint16_t actions_len;
+    list(of_action_t) actions;
+    of_octets_t data;
+};
+
+struct ofp_match_v1 {
+    of_wc_bmap_t wildcards;
+    of_port_no_t in_port;
+    of_mac_addr_t eth_src;
+    of_mac_addr_t eth_dst;
+    uint16_t vlan_vid;
+    uint8_t vlan_pcp;
+    uint8_t[1] pad1;
+    uint16_t eth_type;
+    uint8_t ip_dscp;
+    uint8_t ip_proto;
+    uint8_t[2] pad2;
+    uint32_t ipv4_src;
+    uint32_t ipv4_dst;
+    uint16_t tcp_src;
+    uint16_t tcp_dst;
+};
+
+struct ofp_flow_add {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_match_t match;
+    uint64_t cookie;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint16_t flags;
+    list(of_action_t) actions;
+};
+
+struct ofp_flow_modify {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_match_t match;
+    uint64_t cookie;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint16_t flags;
+    list(of_action_t) actions;
+};
+
+struct ofp_flow_modify_strict {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_match_t match;
+    uint64_t cookie;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint16_t flags;
+    list(of_action_t) actions;
+};
+
+struct ofp_flow_delete {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_match_t match;
+    uint64_t cookie;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint16_t flags;
+    list(of_action_t) actions;
+};
+
+struct ofp_flow_delete_strict {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_match_t match;
+    uint64_t cookie;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint16_t flags;
+    list(of_action_t) actions;
+};
+
+struct ofp_flow_removed {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_match_t match;
+    uint64_t cookie;
+    uint16_t priority;
+    uint8_t reason;
+    uint8_t[1] pad;
+    uint32_t duration_sec;
+    uint32_t duration_nsec;
+    uint16_t idle_timeout;
+    uint8_t[2] pad2;
+    uint64_t packet_count;
+    uint64_t byte_count;
+};
+
+struct ofp_error_msg {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t err_type;
+    uint16_t code;
+    of_octets_t data;
+};
+
+// STATS ENTRIES: flow, table, port, queue
+struct ofp_flow_stats_entry {
+    uint16_t length;
+    uint8_t table_id;
+    uint8_t pad;
+    of_match_t match;
+    uint32_t duration_sec;
+    uint32_t duration_nsec;
+    uint16_t priority;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint8_t[6] pad2;
+    uint64_t cookie;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    list(of_action_t) actions;
+};
+
+struct ofp_table_stats_entry {
+    uint8_t table_id;
+    uint8_t[3] pad;
+    of_table_name_t name;
+    of_wc_bmap_t wildcards;
+    uint32_t max_entries;
+    uint32_t active_count;
+    uint64_t lookup_count;
+    uint64_t matched_count;
+};
+
+struct ofp_port_stats_entry {
+    of_port_no_t port_no;
+    uint8_t[6] pad;
+    uint64_t rx_packets;
+    uint64_t tx_packets;
+    uint64_t rx_bytes;
+    uint64_t tx_bytes;
+    uint64_t rx_dropped;
+    uint64_t tx_dropped;
+    uint64_t rx_errors;
+    uint64_t tx_errors;
+    uint64_t rx_frame_err;
+    uint64_t rx_over_err;
+    uint64_t rx_crc_err;
+    uint64_t collisions;
+};
+
+struct ofp_queue_stats_entry {
+    of_port_no_t port_no;
+    uint8_t[2] pad;
+    uint32_t queue_id;
+    uint64_t tx_bytes;
+    uint64_t tx_packets;
+    uint64_t tx_errors;
+};
+
+// STATS request/reply:  Desc, flow, agg, table, port, queue
+
+struct ofp_desc_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+};
+
+struct ofp_desc_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    of_desc_str_t mfr_desc;
+    of_desc_str_t hw_desc;
+    of_desc_str_t sw_desc;
+    of_serial_num_t serial_num;
+    of_desc_str_t dp_desc;
+};
+
+struct ofp_flow_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    of_match_t match;
+    uint8_t table_id;
+    uint8_t pad;
+    of_port_no_t out_port;
+};
+
+struct ofp_flow_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    list(of_flow_stats_entry_t) entries;
+};
+
+struct ofp_aggregate_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    of_match_t match;
+    uint8_t table_id;
+    uint8_t pad;
+    of_port_no_t out_port;
+};
+
+struct ofp_aggregate_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    uint32_t flow_count;
+    uint8_t[4] pad;
+};
+
+struct ofp_table_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+};
+
+struct ofp_table_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    list(of_table_stats_entry_t) entries;
+};
+
+struct ofp_port_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    of_port_no_t port_no;
+    uint8_t[6] pad;
+};
+
+struct ofp_port_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    list(of_port_stats_entry_t) entries;
+};
+
+struct ofp_queue_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    of_port_no_t port_no;
+    uint8_t[2] pad;
+    uint32_t queue_id;
+};
+
+struct ofp_queue_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    list(of_queue_stats_entry_t) entries;
+};
+
+struct ofp_experimenter_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint32_t experimenter;
+    of_octets_t data;
+};
+
+struct ofp_experimenter_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint32_t experimenter;
+    of_octets_t data;
+};
+
+// END OF STATS OBJECTS
+
+struct ofp_queue_prop {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_queue_prop_min_rate {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    uint16_t rate;
+    uint8_t[6] pad;
+};
+
+struct ofp_packet_queue {
+    uint32_t queue_id;
+    uint16_t len;
+    uint8_t[2] pad;
+    list(of_queue_prop_t) properties;
+};
+
+struct ofp_queue_get_config_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port;
+    uint8_t[2] pad;
+};
+
+struct ofp_queue_get_config_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port;
+    uint8_t[6] pad;
+    list(of_packet_queue_t) queues;
+};
diff --git a/openflow_input/standard-1.1 b/openflow_input/standard-1.1
new file mode 100644
index 0000000..66a4425
--- /dev/null
+++ b/openflow_input/standard-1.1
@@ -0,0 +1,966 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version 2
+
+struct ofp_header {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_hello {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_echo_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_octets_t data;
+};
+
+struct ofp_echo_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_octets_t data;
+};
+
+struct ofp_experimenter {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;
+    uint32_t subtype;
+    of_octets_t data;
+};
+
+struct ofp_barrier_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_barrier_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_get_config_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_get_config_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t flags;
+    uint16_t miss_send_len;
+};
+
+struct ofp_set_config {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t flags;
+    uint16_t miss_send_len;
+};
+
+struct ofp_table_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint8_t table_id;
+    uint8_t[3] pad;
+    uint32_t config;
+};
+
+struct ofp_port_desc {
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+    of_mac_addr_t hw_addr;
+    uint8_t[2] pad2;
+    of_port_name_t name;
+    uint32_t config;
+    uint32_t state;
+    uint32_t curr;
+    uint32_t advertised;
+    uint32_t supported;
+    uint32_t peer;
+    uint32_t curr_speed;
+    uint32_t max_speed;
+};
+
+struct ofp_features_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_features_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t datapath_id;
+    uint32_t n_buffers;
+    uint8_t n_tables;
+    uint8_t[3] pad;
+    uint32_t capabilities;
+    uint32_t reserved;
+    list(of_port_desc_t) ports;
+};
+
+struct ofp_port_status {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint8_t reason;
+    uint8_t[7] pad;
+    of_port_desc_t desc;
+};
+
+struct ofp_port_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+    of_mac_addr_t hw_addr;
+    uint8_t[2] pad2;
+    uint32_t config;
+    uint32_t mask;
+    uint32_t advertise;
+    uint8_t[4] pad3;
+};
+
+struct ofp_packet_in {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t buffer_id;
+    of_port_no_t in_port;
+    of_port_no_t in_phy_port;
+    uint16_t total_len;
+    uint8_t reason;
+    uint8_t table_id;
+    of_octets_t data;
+};
+
+struct ofp_action_output {
+    uint16_t type;
+    uint16_t len;
+    of_port_no_t port;
+    uint16_t max_len;
+    uint8_t[6] pad;
+};
+
+struct ofp_action_set_vlan_vid {
+    uint16_t type;
+    uint16_t len;
+    uint16_t vlan_vid;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_set_vlan_pcp {
+    uint16_t type;
+    uint16_t len;
+    uint8_t vlan_pcp;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_set_dl_src {
+    uint16_t type;
+    uint16_t len;
+    of_mac_addr_t dl_addr;
+    uint8_t[6] pad;
+};
+
+struct ofp_action_set_dl_dst {
+    uint16_t type;
+    uint16_t len;
+    of_mac_addr_t dl_addr;
+    uint8_t[6] pad;
+};
+
+struct ofp_action_set_nw_src {
+    uint16_t type;
+    uint16_t len;
+    uint32_t nw_addr;
+};
+
+struct ofp_action_set_nw_dst {
+    uint16_t type;
+    uint16_t len;
+    uint32_t nw_addr;
+};
+
+struct ofp_action_set_nw_tos {
+    uint16_t type;
+    uint16_t len;
+    uint8_t nw_tos;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_set_nw_ecn {
+    uint16_t type;
+    uint16_t len;
+    uint8_t nw_ecn;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_set_tp_src {
+    uint16_t type;
+    uint16_t len;
+    uint16_t tp_port;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_set_tp_dst {
+    uint16_t type;
+    uint16_t len;
+    uint16_t tp_port;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_copy_ttl_out {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_copy_ttl_in {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_set_mpls_label {
+    uint16_t type;
+    uint16_t len;
+    uint32_t mpls_label;
+};
+
+struct ofp_action_set_mpls_tc {
+    uint16_t type;
+    uint16_t len;
+    uint8_t mpls_tc;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_set_mpls_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t mpls_ttl;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_dec_mpls_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_push_vlan {
+    uint16_t type;
+    uint16_t len;
+    uint16_t ethertype;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_pop_vlan {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_push_mpls {
+    uint16_t type;
+    uint16_t len;
+    uint16_t ethertype;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_pop_mpls {
+    uint16_t type;
+    uint16_t len;
+    uint16_t ethertype;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_set_queue {
+    uint16_t type;
+    uint16_t len;
+    uint32_t queue_id;
+};
+
+struct ofp_action_group {
+    uint16_t type;
+    uint16_t len;
+    uint32_t group_id;
+};
+
+struct ofp_action_set_nw_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t nw_ttl;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_dec_nw_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_experimenter {
+    uint16_t type;
+    uint16_t len;
+    uint32_t experimenter;
+    of_octets_t data;
+};
+
+struct ofp_action {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_packet_out {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t buffer_id;
+    of_port_no_t in_port;
+    uint16_t actions_len;
+    uint8_t[6] pad;
+    list(of_action_t) actions;
+    of_octets_t data;
+};
+
+struct ofp_match_v2 {
+    uint16_t type;
+    uint16_t length;
+    of_port_no_t in_port;
+    of_wc_bmap_t wildcards;
+    of_mac_addr_t eth_src;
+    of_mac_addr_t eth_src_mask;
+    of_mac_addr_t eth_dst;
+    of_mac_addr_t eth_dst_mask;
+    uint16_t vlan_vid;
+    uint8_t vlan_pcp;
+    uint8_t[1] pad1;
+    uint16_t eth_type;
+    uint8_t ip_dscp;
+    uint8_t ip_proto;
+    uint32_t ipv4_src;
+    uint32_t ipv4_src_mask;
+    uint32_t ipv4_dst;
+    uint32_t ipv4_dst_mask;
+    uint16_t tcp_src;
+    uint16_t tcp_dst;
+    uint32_t mpls_label;
+    uint8_t mpls_tc;
+    uint8_t[3] pad2;
+    uint64_t metadata;
+    uint64_t metadata_mask;
+};
+
+struct ofp_instruction {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_instruction_goto_table {
+    uint16_t type;
+    uint16_t len;
+    uint8_t table_id;
+    uint8_t[3] pad;
+};
+
+struct ofp_instruction_write_metadata {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    uint64_t metadata;
+    uint64_t metadata_mask;
+};
+
+struct ofp_instruction_write_actions {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    list(of_action_t) actions;
+};
+
+struct ofp_instruction_apply_actions {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    list(of_action_t) actions;
+};
+
+struct ofp_instruction_clear_actions {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_instruction_experimenter {
+    uint16_t type;
+    uint16_t len;
+    uint32_t experimenter;
+    of_octets_t data;
+};
+
+struct ofp_flow_add {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_modify {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_modify_strict {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_delete {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_delete_strict {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_bucket {
+    uint16_t len;
+    uint16_t weight;
+    of_port_no_t watch_port;
+    uint32_t watch_group;
+    uint8_t[4] pad;
+    list(of_action_t) actions;
+};
+
+struct ofp_group_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t command;
+    uint8_t group_type;
+    uint8_t pad;
+    uint32_t group_id;
+    list(of_bucket_t) buckets;
+};
+
+struct ofp_flow_removed {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint16_t priority;
+    uint8_t reason;
+    uint8_t table_id;
+    uint32_t duration_sec;
+    uint32_t duration_nsec;
+    uint16_t idle_timeout;
+    uint8_t[2] pad2;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    of_match_t match;
+};
+
+struct ofp_error_msg {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t err_type;
+    uint16_t code;
+    of_octets_t data;
+};
+
+// STATS ENTRIES:  flow, table, port, group, group_desc
+
+struct ofp_flow_stats_entry {
+    uint16_t length;
+    uint8_t table_id;
+    uint8_t pad;
+    uint32_t duration_sec;
+    uint32_t duration_nsec;
+    uint16_t priority;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint8_t[6] pad2;
+    uint64_t cookie;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_table_stats_entry {
+    uint8_t table_id;
+    uint8_t[7] pad;
+    of_table_name_t name;
+    of_wc_bmap_t wildcards;
+    of_match_bmap_t match;
+    uint32_t instructions;
+    uint32_t write_actions;
+    uint32_t apply_actions;
+    uint32_t config;
+    uint32_t max_entries;
+    uint32_t active_count;
+    uint64_t lookup_count;
+    uint64_t matched_count;
+};
+
+struct ofp_port_stats_entry {
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+    uint64_t rx_packets;
+    uint64_t tx_packets;
+    uint64_t rx_bytes;
+    uint64_t tx_bytes;
+    uint64_t rx_dropped;
+    uint64_t tx_dropped;
+    uint64_t rx_errors;
+    uint64_t tx_errors;
+    uint64_t rx_frame_err;
+    uint64_t rx_over_err;
+    uint64_t rx_crc_err;
+    uint64_t collisions;
+};
+
+struct ofp_queue_stats_entry {
+    of_port_no_t port_no;
+    uint32_t queue_id;
+    uint64_t tx_bytes;
+    uint64_t tx_packets;
+    uint64_t tx_errors;
+};
+
+struct ofp_bucket_counter {
+    uint64_t packet_count;
+    uint64_t byte_count;
+};
+
+struct ofp_group_stats_entry {
+    uint16_t length;
+    uint8_t[2] pad;
+    uint32_t group_id;
+    uint32_t ref_count;
+    uint8_t[4] pad2;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    list(of_bucket_counter_t) bucket_stats;
+};
+
+struct ofp_group_desc_stats_entry {
+    uint16_t length;
+    uint8_t type;
+    uint8_t pad;
+    uint32_t group_id;
+    list(of_bucket_t) buckets;
+};
+
+// STATS:  Desc, flow, agg, table, port, queue, group, group_desc, experimenter
+
+struct ofp_desc_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+struct ofp_desc_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    of_desc_str_t mfr_desc;
+    of_desc_str_t hw_desc;
+    of_desc_str_t sw_desc;
+    of_serial_num_t serial_num;
+    of_desc_str_t dp_desc;
+};
+
+struct ofp_flow_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint8_t table_id;
+    uint8_t[3] pad;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint8_t[4] pad2;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    of_match_t match;
+};
+
+struct ofp_flow_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_flow_stats_entry_t) entries;
+};
+
+struct ofp_aggregate_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint8_t table_id;
+    uint8_t[3] pad;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint8_t[4] pad2;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    of_match_t match;
+};
+
+struct ofp_aggregate_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    uint32_t flow_count;
+    uint8_t[4] pad;
+};
+
+struct ofp_table_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+struct ofp_table_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_table_stats_entry_t) entries;
+};
+
+struct ofp_port_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+};
+
+struct ofp_port_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_port_stats_entry_t) entries;
+};
+
+struct ofp_queue_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    of_port_no_t port_no;
+    uint32_t queue_id;
+};
+
+struct ofp_queue_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_queue_stats_entry_t) entries;
+};
+
+struct ofp_group_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint32_t group_id;
+    uint8_t[4] pad;
+};
+
+struct ofp_group_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_group_stats_entry_t) entries;
+};
+
+struct ofp_group_desc_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+struct ofp_group_desc_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_group_desc_stats_entry_t) entries;
+};
+
+struct ofp_experimenter_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint32_t experimenter;
+    uint8_t[4] pad;
+    of_octets_t data;
+};
+
+struct ofp_experimenter_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint32_t experimenter;
+    uint8_t[4] pad;
+    of_octets_t data;
+};
+
+// END OF STATS OBJECTS
+
+struct ofp_queue_prop {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_queue_prop_min_rate {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    uint16_t rate;
+    uint8_t[6] pad;
+};
+
+struct ofp_packet_queue {
+    uint32_t queue_id;
+    uint16_t len;
+    uint8_t[2] pad;
+    list(of_queue_prop_t) properties;
+};
+
+struct ofp_queue_get_config_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port;
+    uint8_t[4] pad;
+};
+
+struct ofp_queue_get_config_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port;
+    uint8_t[4] pad;
+    list(of_packet_queue_t) queues;
+};
diff --git a/openflow_input/standard-1.2 b/openflow_input/standard-1.2
new file mode 100644
index 0000000..0961abd
--- /dev/null
+++ b/openflow_input/standard-1.2
@@ -0,0 +1,958 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version 3
+
+struct ofp_header {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_hello {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_echo_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_octets_t data;
+};
+
+struct ofp_echo_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_octets_t data;
+};
+
+struct ofp_experimenter {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;
+    uint32_t subtype;
+    of_octets_t data;
+};
+
+struct ofp_barrier_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_barrier_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_get_config_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_get_config_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t flags;
+    uint16_t miss_send_len;
+};
+
+struct ofp_set_config {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t flags;
+    uint16_t miss_send_len;
+};
+
+struct ofp_table_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint8_t table_id;
+    uint8_t[3] pad;
+    uint32_t config;
+};
+
+struct ofp_port_desc {
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+    of_mac_addr_t hw_addr;
+    uint8_t[2] pad2;
+    of_port_name_t name;
+    uint32_t config;
+    uint32_t state;
+    uint32_t curr;
+    uint32_t advertised;
+    uint32_t supported;
+    uint32_t peer;
+    uint32_t curr_speed;
+    uint32_t max_speed;
+};
+
+struct ofp_features_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_features_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t datapath_id;
+    uint32_t n_buffers;
+    uint8_t n_tables;
+    uint8_t[3] pad;
+    uint32_t capabilities;
+    uint32_t reserved;
+    list(of_port_desc_t) ports;
+};
+
+struct ofp_port_status {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint8_t reason;
+    uint8_t[7] pad;
+    of_port_desc_t desc;
+};
+
+struct ofp_port_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+    of_mac_addr_t hw_addr;
+    uint8_t[2] pad2;
+    uint32_t config;
+    uint32_t mask;
+    uint32_t advertise;
+    uint8_t[4] pad3;
+};
+
+struct ofp_match_v3 {
+    uint16_t type;
+    uint16_t length;
+    list(of_oxm_t) oxm_list;
+};
+
+struct ofp_oxm_experimenter_header {
+    uint32_t oxm_header;
+    uint32_t experimenter;
+    of_octets_t data;
+};
+
+struct ofp_action_output {
+    uint16_t type;
+    uint16_t len;
+    of_port_no_t port;
+    uint16_t max_len;
+    uint8_t[6] pad;
+};
+
+struct ofp_action_copy_ttl_out {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_copy_ttl_in {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_set_mpls_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t mpls_ttl;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_dec_mpls_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_push_vlan {
+    uint16_t type;
+    uint16_t len;
+    uint16_t ethertype;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_pop_vlan {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_push_mpls {
+    uint16_t type;
+    uint16_t len;
+    uint16_t ethertype;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_pop_mpls {
+    uint16_t type;
+    uint16_t len;
+    uint16_t ethertype;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_set_queue {
+    uint16_t type;
+    uint16_t len;
+    uint32_t queue_id;
+};
+
+struct ofp_action_group {
+    uint16_t type;
+    uint16_t len;
+    uint32_t group_id;
+};
+
+struct ofp_action_set_nw_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t nw_ttl;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_dec_nw_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_set_field {
+    uint16_t type;
+    uint16_t len;
+    of_octets_t field;
+};
+
+struct ofp_action_experimenter {
+    uint16_t type;
+    uint16_t len;
+    uint32_t experimenter;
+    of_octets_t data;
+};
+
+struct ofp_action {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_instruction {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_instruction_goto_table {
+    uint16_t type;
+    uint16_t len;
+    uint8_t table_id;
+    uint8_t[3] pad;
+};
+
+struct ofp_instruction_write_metadata {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    uint64_t metadata;
+    uint64_t metadata_mask;
+};
+
+struct ofp_instruction_write_actions {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    list(of_action_t) actions;
+};
+
+struct ofp_instruction_apply_actions {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    list(of_action_t) actions;
+};
+
+struct ofp_instruction_clear_actions {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_instruction_experimenter {
+    uint16_t type;
+    uint16_t len;
+    uint32_t experimenter;
+    of_octets_t data;
+};
+
+struct ofp_flow_add {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_modify {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_modify_strict {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_delete {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_delete_strict {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_bucket {
+    uint16_t len;
+    uint16_t weight;
+    of_port_no_t watch_port;
+    uint32_t watch_group;
+    uint8_t[4] pad;
+    list(of_action_t) actions;
+};
+
+struct ofp_group_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t command;
+    uint8_t group_type;
+    uint8_t pad;
+    uint32_t group_id;
+    list(of_bucket_t) buckets;
+};
+
+struct ofp_packet_out {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t buffer_id;
+    of_port_no_t in_port;
+    uint16_t actions_len;
+    uint8_t[6] pad;
+    list(of_action_t) actions;
+    of_octets_t data;
+};
+
+struct ofp_packet_in {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t buffer_id;
+    uint16_t total_len;
+    uint8_t reason;
+    uint8_t table_id;
+    of_match_t match;
+    uint8_t[2] pad;
+    of_octets_t data; /* FIXME: Ensure total_len gets updated */
+};
+
+struct ofp_flow_removed {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint16_t priority;
+    uint8_t reason;
+    uint8_t table_id;
+    uint32_t duration_sec;
+    uint32_t duration_nsec;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    of_match_t match;
+};
+
+struct ofp_error_msg {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t err_type;
+    uint16_t code;
+    of_octets_t data;
+};
+
+// struct ofp_error_experimenter_msg {
+//    uint8_t version;
+//    uint8_t type;
+//    uint16_t length;
+//    uint32_t xid;
+//    uint16_t err_type;
+//    uint16_t subtype;
+//    uint32_t experimenter;
+//    of_octets_t data;
+//};
+
+// STATS ENTRIES: flow, table, port, queue, group stats, group desc stats
+// FIXME: Verify disambiguation w/ length in object and entry
+
+struct ofp_flow_stats_entry {
+    uint16_t length;
+    uint8_t table_id;
+    uint8_t pad;
+    uint32_t duration_sec;
+    uint32_t duration_nsec;
+    uint16_t priority;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint8_t[6] pad2;
+    uint64_t cookie;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_table_stats_entry {
+    uint8_t table_id;
+    uint8_t[7] pad;
+    of_table_name_t name;
+    of_match_bmap_t match;
+    of_wc_bmap_t wildcards;
+    uint32_t write_actions;
+    uint32_t apply_actions;
+    uint64_t write_setfields;
+    uint64_t apply_setfields;
+    uint64_t metadata_match;
+    uint64_t metadata_write;
+    uint32_t instructions;
+    uint32_t config;
+    uint32_t max_entries;
+    uint32_t active_count;
+    uint64_t lookup_count;
+    uint64_t matched_count;
+};
+
+struct ofp_port_stats_entry {
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+    uint64_t rx_packets;
+    uint64_t tx_packets;
+    uint64_t rx_bytes;
+    uint64_t tx_bytes;
+    uint64_t rx_dropped;
+    uint64_t tx_dropped;
+    uint64_t rx_errors;
+    uint64_t tx_errors;
+    uint64_t rx_frame_err;
+    uint64_t rx_over_err;
+    uint64_t rx_crc_err;
+    uint64_t collisions;
+};
+
+struct ofp_queue_stats_entry {
+    of_port_no_t port_no;
+    uint32_t queue_id;
+    uint64_t tx_bytes;
+    uint64_t tx_packets;
+    uint64_t tx_errors;
+};
+
+struct ofp_bucket_counter {
+    uint64_t packet_count;
+    uint64_t byte_count;
+};
+
+struct ofp_group_stats_entry {
+    uint16_t length;
+    uint8_t[2] pad;
+    uint32_t group_id;
+    uint32_t ref_count;
+    uint8_t[4] pad2;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    list(of_bucket_counter_t) bucket_stats;
+};
+
+struct ofp_group_desc_stats_entry {
+    uint16_t length;
+    uint8_t type;
+    uint8_t pad;
+    uint32_t group_id;
+    list(of_bucket_t) buckets;
+};
+
+// STATS: 
+//  Desc, flow, agg, table, port, queue, group, group_desc, group_feat, experi
+
+struct ofp_desc_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+struct ofp_desc_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    of_desc_str_t mfr_desc;
+    of_desc_str_t hw_desc;
+    of_desc_str_t sw_desc;
+    of_serial_num_t serial_num;
+    of_desc_str_t dp_desc;
+};
+
+struct ofp_flow_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint8_t table_id;
+    uint8_t[3] pad;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint8_t[4] pad2;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    of_match_t match;
+};
+
+struct ofp_flow_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_flow_stats_entry_t) entries;
+};
+
+struct ofp_aggregate_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint8_t table_id;
+    uint8_t[3] pad;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint8_t[4] pad2;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    of_match_t match;
+};
+
+struct ofp_aggregate_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    uint32_t flow_count;
+    uint8_t[4] pad;
+};
+
+struct ofp_table_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+struct ofp_table_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_table_stats_entry_t) entries;
+};
+
+struct ofp_port_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+};
+
+struct ofp_port_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_port_stats_entry_t) entries;
+};
+
+struct ofp_queue_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    of_port_no_t port_no;
+    uint32_t queue_id;
+};
+
+struct ofp_queue_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_queue_stats_entry_t) entries;
+};
+
+struct ofp_group_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint32_t group_id;
+    uint8_t[4] pad;
+};
+
+struct ofp_group_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_group_stats_entry_t) entries;
+};
+
+struct ofp_group_desc_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+struct ofp_group_desc_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_group_desc_stats_entry_t) entries;
+};
+
+struct ofp_group_features_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+struct ofp_group_features_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint32_t types;
+    uint32_t capabilities;
+    uint32_t max_groups_all;
+    uint32_t max_groups_select;
+    uint32_t max_groups_indirect;
+    uint32_t max_groups_ff;
+    uint32_t actions_all;
+    uint32_t actions_select;
+    uint32_t actions_indirect;
+    uint32_t actions_ff;
+};
+
+struct ofp_experimenter_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint32_t experimenter;
+    uint32_t subtype;
+    of_octets_t data;
+};
+
+struct ofp_experimenter_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint32_t experimenter;
+    uint32_t subtype;
+    of_octets_t data;
+};
+
+// END OF STATS OBJECTS
+
+struct ofp_queue_prop {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_queue_prop_min_rate {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    uint16_t rate;
+    uint8_t[6] pad;
+};
+
+struct ofp_queue_prop_max_rate {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    uint16_t rate;
+    uint8_t[6] pad;
+};
+
+struct ofp_queue_prop_experimenter {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    uint32_t experimenter;
+    uint8_t[4] pad;
+    of_octets_t data;
+};
+
+struct ofp_packet_queue {
+    uint32_t queue_id;
+    of_port_no_t port;
+    uint16_t len;
+    uint8_t[6] pad;
+    list(of_queue_prop_t) properties;
+};
+
+struct ofp_queue_get_config_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port;
+    uint8_t[4] pad;
+};
+
+struct ofp_queue_get_config_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port;
+    uint8_t[4] pad;
+    list(of_packet_queue_t) queues;
+};
+
+struct ofp_role_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t role;
+    uint8_t[4] pad;
+    uint64_t generation_id;
+};
+
+struct ofp_role_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_octets_t data;
+};
diff --git a/openflow_input/standard-1.3 b/openflow_input/standard-1.3
new file mode 100644
index 0000000..258ec01
--- /dev/null
+++ b/openflow_input/standard-1.3
@@ -0,0 +1,1319 @@
+// Copyright 2013, Big Switch Networks, Inc.
+//
+// LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+// the following special exception:
+//
+// LOXI Exception
+//
+// As a special exception to the terms of the EPL, you may distribute libraries
+// generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+// that copyright and licensing notices generated by LoxiGen are not altered or removed
+// from the LoxiGen Libraries and the notice provided below is (i) included in
+// the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+// documentation for the LoxiGen Libraries, if distributed in binary form.
+//
+// Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+//
+// You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+// a copy of the EPL at:
+//
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// EPL for the specific language governing permissions and limitations
+// under the EPL.
+
+#version 4
+
+struct ofp_header {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+// Special structures used for managing scalar list elements
+struct ofp_uint32 {
+    uint32_t value;
+};
+
+// Special structures used for managing scalar list elements
+struct ofp_uint8 {
+    uint8_t value;
+};
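+// These wrapper structs exist so that lists of plain scalars (for example
+// the list(of_uint32_t) version bitmaps and oxm id lists, and the
+// list(of_uint8_t) next-table ids below) can be generated with the same
+// machinery as lists of full objects.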
+
+struct ofp_hello_elem {
+    uint16_t type;
+    uint16_t length;
+};
+
+struct ofp_hello_elem_versionbitmap {
+    uint16_t type;
+    uint16_t length;
+    list(of_uint32_t) bitmaps;
+};
+
+struct ofp_hello {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    list(of_hello_elem_t) elements;
+};
+
+struct ofp_echo_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_octets_t data;
+};
+
+struct ofp_echo_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_octets_t data;
+};
+
+struct ofp_experimenter {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t experimenter;
+    uint32_t subtype;
+    of_octets_t data;
+};
+
+struct ofp_barrier_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_barrier_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_get_config_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_get_config_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t flags;
+    uint16_t miss_send_len;
+};
+
+struct ofp_set_config {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t flags;
+    uint16_t miss_send_len;
+};
+
+struct ofp_table_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint8_t table_id;
+    uint8_t[3] pad;
+    uint32_t config;
+};
+
+struct ofp_port_desc {
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+    of_mac_addr_t hw_addr;
+    uint8_t[2] pad2;
+    of_port_name_t name;
+    uint32_t config;
+    uint32_t state;
+    uint32_t curr;
+    uint32_t advertised;
+    uint32_t supported;
+    uint32_t peer;
+    uint32_t curr_speed;
+    uint32_t max_speed;
+};
+
+struct ofp_features_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+};
+
+struct ofp_features_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t datapath_id;
+    uint32_t n_buffers;
+    uint8_t n_tables;
+    uint8_t auxiliary_id;
+    uint8_t[2] pad;
+    uint32_t capabilities;
+    uint32_t reserved;
+};
+
+struct ofp_port_status {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint8_t reason;
+    uint8_t[7] pad;
+    of_port_desc_t desc;
+};
+
+struct ofp_port_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+    of_mac_addr_t hw_addr;
+    uint8_t[2] pad2;
+    uint32_t config;
+    uint32_t mask;
+    uint32_t advertise;
+    uint8_t[4] pad3;
+};
+
+// FIXME Does this need to be v4?
+struct ofp_match_v3 {
+    uint16_t type;
+    uint16_t length;
+    list(of_oxm_t) oxm_list;
+};
+
+struct ofp_oxm_experimenter_header {
+    uint32_t oxm_header;
+    uint32_t experimenter;
+    of_octets_t data;
+};
+
+// This looks like an action header, but is standalone.  See the
+// ofp_table_feature_prop_*_actions properties below, which carry lists of these ids.
+struct ofp_action_id {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_output {
+    uint16_t type;
+    uint16_t len;
+    of_port_no_t port;
+    uint16_t max_len;
+    uint8_t[6] pad;
+};
+
+struct ofp_action_copy_ttl_out {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_copy_ttl_in {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_set_mpls_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t mpls_ttl;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_dec_mpls_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_push_vlan {
+    uint16_t type;
+    uint16_t len;
+    uint16_t ethertype;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_pop_vlan {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_push_mpls {
+    uint16_t type;
+    uint16_t len;
+    uint16_t ethertype;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_pop_mpls {
+    uint16_t type;
+    uint16_t len;
+    uint16_t ethertype;
+    uint8_t[2] pad;
+};
+
+struct ofp_action_set_queue {
+    uint16_t type;
+    uint16_t len;
+    uint32_t queue_id;
+};
+
+struct ofp_action_group {
+    uint16_t type;
+    uint16_t len;
+    uint32_t group_id;
+};
+
+struct ofp_action_set_nw_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t nw_ttl;
+    uint8_t[3] pad;
+};
+
+struct ofp_action_dec_nw_ttl {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_set_field {
+    uint16_t type;
+    uint16_t len;
+    of_octets_t field;
+};
+
+struct ofp_action_experimenter {
+    uint16_t type;
+    uint16_t len;
+    uint32_t experimenter;
+    of_octets_t data;
+};
+
+struct ofp_action_pop_pbb {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_action_push_pbb {
+    uint16_t type;
+    uint16_t len;
+    uint16_t ethertype;
+    uint8_t[2] pad;
+};
+
+struct ofp_action {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_instruction {
+    uint16_t type;
+    uint16_t len;
+};
+
+struct ofp_instruction_goto_table {
+    uint16_t type;
+    uint16_t len;
+    uint8_t table_id;
+    uint8_t[3] pad;
+};
+
+struct ofp_instruction_write_metadata {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    uint64_t metadata;
+    uint64_t metadata_mask;
+};
+
+struct ofp_instruction_write_actions {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    list(of_action_t) actions;
+};
+
+struct ofp_instruction_apply_actions {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    list(of_action_t) actions;
+};
+
+struct ofp_instruction_clear_actions {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_instruction_meter {
+    uint16_t type;
+    uint16_t len;
+    uint32_t meter_id;
+};
+
+struct ofp_instruction_experimenter {
+    uint16_t type;
+    uint16_t len;
+    uint32_t experimenter;
+    of_octets_t data;
+};
+
+struct ofp_flow_add {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_modify {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_modify_strict {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_delete {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_flow_delete_strict {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    uint8_t table_id;
+    of_fm_cmd_t _command;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint16_t priority;
+    uint32_t buffer_id;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint16_t flags;
+    uint8_t[2] pad;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+struct ofp_bucket {
+    uint16_t len;
+    uint16_t weight;
+    of_port_no_t watch_port;
+    uint32_t watch_group;
+    uint8_t[4] pad;
+    list(of_action_t) actions;
+};
+
+struct ofp_group_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t command;
+    uint8_t group_type;
+    uint8_t pad;
+    uint32_t group_id;
+    list(of_bucket_t) buckets;
+};
+
+struct ofp_packet_out {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t buffer_id;
+    of_port_no_t in_port;
+    uint16_t actions_len;
+    uint8_t[6] pad;
+    list(of_action_t) actions;
+    of_octets_t data;
+};
+
+struct ofp_packet_in {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t buffer_id;
+    uint16_t total_len;
+    uint8_t reason;
+    uint8_t table_id;
+    uint64_t cookie;
+    of_match_t match;
+    uint8_t[2] pad;
+    of_octets_t data; /* FIXME: Ensure total_len gets updated */
+};
+
+struct ofp_flow_removed {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint64_t cookie;
+    uint16_t priority;
+    uint8_t reason;
+    uint8_t table_id;
+    uint32_t duration_sec;
+    uint32_t duration_nsec;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    of_match_t match;
+};
+
+struct ofp_meter_band {
+    uint16_t        type;
+    uint16_t        len;
+//    uint32_t        rate;  // These are excluded b/c this is the header
+//    uint32_t        burst_size;  // These are excluded b/c this is the header
+};
+
+struct ofp_meter_band_drop {
+    uint16_t        type;
+    uint16_t        len;
+    uint32_t        rate;
+    uint32_t        burst_size;
+    uint8_t[4]      pad;
+};
+
+struct ofp_meter_band_dscp_remark {
+    uint16_t        type;
+    uint16_t        len;
+    uint32_t        rate;
+    uint32_t        burst_size;
+    uint8_t         prec_level;
+    uint8_t[3]      pad;
+};
+
+struct ofp_meter_band_experimenter {
+    uint16_t        type;
+    uint16_t        len;
+    uint32_t        rate;
+    uint32_t        burst_size;
+    uint32_t        experimenter;
+};
+
+struct ofp_meter_mod {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t command;
+    uint16_t flags;
+    uint32_t meter_id;
+    list(of_meter_band_t) meters;
+};
+
+struct ofp_error_msg {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t err_type;
+    uint16_t code;
+    of_octets_t data;
+};
+
+//struct ofp_error_experimenter_msg {
+//    uint8_t version;
+//    uint8_t type;
+//    uint16_t length;
+//    uint32_t xid;
+//    uint16_t err_type;
+//    uint16_t subtype;
+//    uint32_t experimenter;
+//    of_octets_t data;
+//};
+
+// STATS ENTRIES: flow, table, port, queue, group stats, group desc stats
+
+struct ofp_flow_stats_entry {
+    uint16_t length;
+    uint8_t table_id;
+    uint8_t pad;
+    uint32_t duration_sec;
+    uint32_t duration_nsec;
+    uint16_t priority;
+    uint16_t idle_timeout;
+    uint16_t hard_timeout;
+    uint8_t[6] pad2;
+    uint64_t cookie;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    of_match_t match;
+    list(of_instruction_t) instructions;
+};
+
+
+struct ofp_table_stats_entry {
+    uint8_t table_id;
+    uint8_t[3] pad;
+    uint32_t active_count;
+    uint64_t lookup_count;
+    uint64_t matched_count;
+};
+
+struct ofp_port_stats_entry {
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+    uint64_t rx_packets;
+    uint64_t tx_packets;
+    uint64_t rx_bytes;
+    uint64_t tx_bytes;
+    uint64_t rx_dropped;
+    uint64_t tx_dropped;
+    uint64_t rx_errors;
+    uint64_t tx_errors;
+    uint64_t rx_frame_err;
+    uint64_t rx_over_err;
+    uint64_t rx_crc_err;
+    uint64_t collisions;
+    uint32_t duration_sec;
+    uint32_t duration_nsec;
+};
+
+struct ofp_queue_stats_entry {
+    of_port_no_t port_no;
+    uint32_t queue_id;
+    uint64_t tx_bytes;
+    uint64_t tx_packets;
+    uint64_t tx_errors;
+    uint32_t duration_sec;
+    uint32_t duration_nsec;
+};
+
+struct ofp_bucket_counter {
+    uint64_t packet_count;
+    uint64_t byte_count;
+};
+
+struct ofp_group_stats_entry {
+    uint16_t length;
+    uint8_t[2] pad;
+    uint32_t group_id;
+    uint32_t ref_count;
+    uint8_t[4] pad;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    uint32_t duration_sec;
+    uint32_t duration_nsec;
+    list(of_bucket_counter_t) bucket_stats;
+};
+
+struct ofp_group_desc_stats_entry {
+    uint16_t length;
+    uint8_t type;
+    uint8_t pad;
+    uint32_t group_id;
+    list(of_bucket_t) buckets;
+};
+
+// STATS: 
+//  Desc, flow, agg, table, port, queue, group, group_desc, group_feat, experi
+
+struct ofp_desc_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+struct ofp_desc_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    of_desc_str_t mfr_desc;
+    of_desc_str_t hw_desc;
+    of_desc_str_t sw_desc;
+    of_serial_num_t serial_num;
+    of_desc_str_t dp_desc;
+};
+
+struct ofp_flow_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint8_t table_id;
+    uint8_t[3] pad;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint8_t[4] pad2;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    of_match_t match;
+};
+
+struct ofp_flow_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_flow_stats_entry_t) entries;
+};
+
+struct ofp_aggregate_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint8_t table_id;
+    uint8_t[3] pad;
+    of_port_no_t out_port;
+    uint32_t out_group;
+    uint8_t[4] pad2;
+    uint64_t cookie;
+    uint64_t cookie_mask;
+    of_match_t match;
+};
+
+struct ofp_aggregate_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint64_t packet_count;
+    uint64_t byte_count;
+    uint32_t flow_count;
+    uint8_t[4] pad;
+};
+
+// FIXME: These are padded out to an 8-byte alignment boundary beyond the indicated length
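+// For example, a property whose length field reads 12 is followed on the
+// wire by 4 bytes of padding to reach the next 8-byte boundary.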
+
+struct ofp_table_feature_prop {
+    uint16_t         type;
+    uint16_t         length;
+};
+
+struct ofp_table_feature_prop_instructions {
+    uint16_t         type;
+    uint16_t         length;
+    // FIXME Check if instruction_t is right for ids here
+    list(of_instruction_t)   instruction_ids;
+};
+
+struct ofp_table_feature_prop_instructions_miss {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_instruction_t)   instruction_ids;
+};
+
+struct ofp_table_feature_prop_next_tables {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_uint8_t) next_table_ids;
+};
+
+struct ofp_table_feature_prop_next_tables_miss {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_uint8_t) next_table_ids;
+};
+
+struct ofp_table_feature_prop_write_actions {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_action_id_t) action_ids;
+};
+
+struct ofp_table_feature_prop_write_actions_miss {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_action_id_t) action_ids;
+};
+
+struct ofp_table_feature_prop_apply_actions {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_action_id_t) action_ids;
+};
+
+struct ofp_table_feature_prop_apply_actions_miss {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_action_id_t) action_ids;
+};
+
+struct ofp_table_feature_prop_match {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_uint32_t) oxm_ids;
+};
+
+struct ofp_table_feature_prop_wildcards {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_uint32_t) oxm_ids;
+};
+
+struct ofp_table_feature_prop_write_setfield {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_uint32_t) oxm_ids;
+};
+
+struct ofp_table_feature_prop_write_setfield_miss {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_uint32_t) oxm_ids;
+};
+
+struct ofp_table_feature_prop_apply_setfield {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_uint32_t) oxm_ids;
+};
+
+struct ofp_table_feature_prop_apply_setfield_miss {
+    uint16_t         type;
+    uint16_t         length;
+    list(of_uint32_t) oxm_ids;
+};
+
+struct ofp_table_feature_prop_experimenter {
+    uint16_t         type;
+    uint16_t         length;
+    uint32_t         experimenter;
+    uint32_t         subtype;
+    of_octets_t      experimenter_data;
+};
+
+// Not yet supported
+// struct ofp_table_feature_prop_experimenter_miss {
+//     uint16_t         type;
+//     uint16_t         length;
+//     uint32_t         experimenter;
+//     uint32_t         subtype;
+//     of_octets_t      experimenter_data;
+// };
+
+struct ofp_table_features {
+    uint16_t length;
+    uint8_t table_id;
+    uint8_t[5] pad;
+    of_table_name_t name;
+    uint64_t metadata_match;
+    uint64_t metadata_write;
+    uint32_t config;
+    uint32_t max_entries;
+    list(of_table_feature_prop_t) properties;
+};
+
+struct ofp_meter_features {
+    uint32_t    max_meter;
+    uint32_t    band_types;
+    uint32_t    capabilities;
+    uint8_t     max_bands;
+    uint8_t     max_color;
+    uint8_t[2]  pad;
+};
+
+struct ofp_port_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    of_port_no_t port_no;
+    uint8_t[4] pad;
+};
+
+struct ofp_port_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_port_stats_entry_t) entries;
+};
+
+struct ofp_queue_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    of_port_no_t port_no;
+    uint32_t queue_id;
+};
+
+struct ofp_queue_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_queue_stats_entry_t) entries;
+};
+
+struct ofp_group_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint32_t group_id;
+    uint8_t[4] pad;
+};
+
+struct ofp_group_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_group_stats_entry_t) entries;
+};
+
+struct ofp_group_desc_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+struct ofp_group_desc_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_group_desc_stats_entry_t) entries;
+};
+
+struct ofp_group_features_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+struct ofp_group_features_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint32_t types;
+    uint32_t capabilities;
+    uint32_t max_groups_all;
+    uint32_t max_groups_select;
+    uint32_t max_groups_indirect;
+    uint32_t max_groups_ff;
+    uint32_t actions_all;
+    uint32_t actions_select;
+    uint32_t actions_indirect;
+    uint32_t actions_ff;
+};
+
+struct ofp_meter_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint32_t meter_id;
+    uint8_t[4] pad;
+};
+
+struct ofp_meter_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_meter_stats_t) entries;
+};
+
+struct ofp_meter_config_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    uint32_t meter_id;
+    uint8_t[4] pad;
+};
+
+struct ofp_meter_config_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_meter_band_t) entries;
+};
+
+// FIXME stats added to get things working
+struct ofp_meter_features_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+// FIXME stats added to get things working
+struct ofp_meter_features_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    of_meter_features_t features;
+};
+
+// FIXME stats added to get things working
+struct ofp_table_features_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_table_features_t) entries;
+};
+
+// FIXME stats added to get things working
+struct ofp_table_features_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_table_features_t) entries;
+};
+
+// FIXME stats added to get things working
+struct ofp_port_desc_stats_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+};
+
+// FIXME stats added to get things working
+struct ofp_port_desc_stats_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint16_t stats_type;
+    uint16_t flags;
+    uint8_t[4] pad;
+    list(of_port_desc_t) entries;
+};
+
+struct ofp_meter_band_stats {
+    uint64_t        packet_band_count;
+    uint64_t        byte_band_count;
+};
+
+struct ofp_meter_stats {
+    uint32_t        meter_id;
+    uint16_t        len;
+    uint8_t[6]      pad;
+    uint32_t        flow_count;
+    uint64_t        packet_in_count;
+    uint64_t        byte_in_count;
+    uint32_t        duration_sec;
+    uint32_t        duration_nsec;
+    list(of_meter_band_stats_t) band_stats;
+};
+
+struct ofp_meter_config {
+    uint16_t        length;
+    uint16_t        flags;
+    uint32_t        meter_id;
+    list(of_meter_band_t) entries;
+};
+
+struct ofp_experimenter_multipart_header {
+    uint32_t experimenter;
+    uint32_t subtype;
+};
+
+// END OF STATS OBJECTS
+
+struct ofp_queue_prop {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+};
+
+struct ofp_queue_prop_min_rate {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    uint16_t rate;
+    uint8_t[6] pad;
+};
+
+struct ofp_queue_prop_max_rate {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    uint16_t rate;
+    uint8_t[6] pad;
+};
+
+struct ofp_queue_prop_experimenter {
+    uint16_t type;
+    uint16_t len;
+    uint8_t[4] pad;
+    uint32_t experimenter;
+    uint8_t[4] pad;
+    of_octets_t data;
+};
+
+struct ofp_packet_queue {
+    uint32_t queue_id;
+    of_port_no_t port;
+    uint16_t len;
+    uint8_t[6] pad;
+    list(of_queue_prop_t) properties;
+};
+
+struct ofp_queue_get_config_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port;
+    uint8_t[4] pad;
+};
+
+struct ofp_queue_get_config_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_port_no_t port;
+    uint8_t[4] pad;
+    list(of_packet_queue_t) queues;
+};
+
+struct ofp_role_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t role;
+    uint8_t[4] pad;
+    uint64_t generation_id;
+};
+
+struct ofp_role_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    of_octets_t data;
+};
+
+////////////////////////////////////////////////////////////////
+// FIXME understand async; where do bitmasks live?
+// Determine bitmap type for masks below.
+// DOCUMENT masks where uint32_t[0] is interest for equal/master
+//   while uint32_t[1] is interest for slave
+////////////////////////////////////////////////////////////////
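+// Note that the structs below flatten those two-element mask arrays into
+// explicit *_equal_master and *_slave members.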
+
+struct ofp_async_get_request {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t packet_in_mask_equal_master;
+    uint32_t packet_in_mask_slave;
+    uint32_t port_status_mask_equal_master;
+    uint32_t port_status_mask_slave;
+    uint32_t flow_removed_mask_equal_master;
+    uint32_t flow_removed_mask_slave;
+};
+
+struct ofp_async_get_reply {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t packet_in_mask_equal_master;
+    uint32_t packet_in_mask_slave;
+    uint32_t port_status_mask_equal_master;
+    uint32_t port_status_mask_slave;
+    uint32_t flow_removed_mask_equal_master;
+    uint32_t flow_removed_mask_slave;
+};
+
+struct ofp_async_set {
+    uint8_t version;
+    uint8_t type;
+    uint16_t length;
+    uint32_t xid;
+    uint32_t packet_in_mask_equal_master;
+    uint32_t packet_in_mask_slave;
+    uint32_t port_status_mask_equal_master;
+    uint32_t port_status_mask_slave;
+    uint32_t flow_removed_mask_equal_master;
+    uint32_t flow_removed_mask_slave;
+};
diff --git a/py_gen/__init__.py b/py_gen/__init__.py
new file mode 100644
index 0000000..5e4e379
--- /dev/null
+++ b/py_gen/__init__.py
@@ -0,0 +1,26 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
diff --git a/py_gen/codegen.py b/py_gen/codegen.py
new file mode 100644
index 0000000..211c71e
--- /dev/null
+++ b/py_gen/codegen.py
@@ -0,0 +1,164 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+from collections import namedtuple
+import of_g
+import loxi_front_end.type_maps as type_maps
+import loxi_utils.loxi_utils as utils
+import util
+import oftype
+
+OFClass = namedtuple('OFClass', ['name', 'pyname',
+                                 'members', 'length_member', 'type_members',
+                                 'min_length', 'is_fixed_length'])
+Member = namedtuple('Member', ['name', 'oftype', 'offset', 'skip'])
+LengthMember = namedtuple('LengthMember', ['name', 'oftype', 'offset'])
+TypeMember = namedtuple('TypeMember', ['name', 'oftype', 'offset', 'value'])
+
+def get_type_values(cls, version):
+    """
+    Returns a map from the name of the type member to its value.
+    """
+    type_values = {}
+
+    # Primary wire type
+    if utils.class_is_message(cls):
+        type_values['version'] = 'const.OFP_VERSION'
+        type_values['type'] = util.constant_for_value(version, "ofp_type", util.primary_wire_type(cls, version))
+        if cls in type_maps.flow_mod_list:
+            type_values['_command'] = util.constant_for_value(version, "ofp_flow_mod_command",
+                                                              type_maps.flow_mod_types[version][cls[8:]])
+        if cls in type_maps.stats_request_list:
+            type_values['stats_type'] = util.constant_for_value(version, "ofp_stats_types",
+                                                                type_maps.stats_types[version][cls[3:-14]])
+        if cls in type_maps.stats_reply_list:
+            type_values['stats_type'] = util.constant_for_value(version, "ofp_stats_types",
+                                                                type_maps.stats_types[version][cls[3:-12]])
+        if type_maps.message_is_extension(cls, version):
+            type_values['experimenter'] = '%#x' % type_maps.extension_to_experimenter_id(cls)
+            type_values['subtype'] = type_maps.extension_message_to_subtype(cls, version)
+    elif utils.class_is_action(cls):
+        type_values['type'] = util.constant_for_value(version, "ofp_action_type", util.primary_wire_type(cls, version))
+        if type_maps.action_is_extension(cls, version):
+            type_values['experimenter'] = '%#x' % type_maps.extension_to_experimenter_id(cls)
+            type_values['subtype'] = type_maps.extension_action_to_subtype(cls, version)
+    elif utils.class_is_queue_prop(cls):
+        type_values['type'] = util.constant_for_value(version, "ofp_queue_properties", util.primary_wire_type(cls, version))
+
+    return type_values
+
+# Create intermediate representation
+def build_ofclasses(version):
+    blacklist = ["of_action", "of_action_header", "of_header", "of_queue_prop",
+                 "of_queue_prop_header", "of_experimenter", "of_action_experimenter"]
+    ofclasses = []
+    for cls in of_g.standard_class_order:
+        if version not in of_g.unified[cls] or cls in blacklist:
+            continue
+        unified_class = util.lookup_unified_class(cls, version)
+
+        # Name for the generated Python class
+        if utils.class_is_action(cls):
+            pyname = cls[10:]
+        else:
+            pyname = cls[3:]
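+        # e.g. "of_action_output" becomes "output" and "of_packet_in"
+        # becomes "packet_in".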
+
+        type_values = get_type_values(cls, version)
+        members = []
+
+        length_member = None
+        type_members = []
+
+        for member in unified_class['members']:
+            if member['name'] in ['length', 'len']:
+                length_member = LengthMember(name=member['name'],
+                                             offset=member['offset'],
+                                             oftype=oftype.OFType(member['m_type'], version))
+            elif member['name'] in type_values:
+                type_members.append(TypeMember(name=member['name'],
+                                               offset=member['offset'],
+                                               oftype=oftype.OFType(member['m_type'], version),
+                                               value=type_values[member['name']]))
+            else:
+                # HACK: ensure member names stay unique; some wire structs
+                # carry more than one "pad" member, so rename the second.
+                if member['name'] == "pad" and \
+                        any(x.name == 'pad' for x in members):
+                    m_name = "pad2"
+                else:
+                    m_name = member['name']
+                members.append(Member(name=m_name,
+                                      oftype=oftype.OFType(member['m_type'], version),
+                                      offset=member['offset'],
+                                      skip=member['name'] in of_g.skip_members))
+
+        ofclasses.append(
+            OFClass(name=cls,
+                    pyname=pyname,
+                    members=members,
+                    length_member=length_member,
+                    type_members=type_members,
+                    min_length=of_g.base_length[(cls, version)],
+                    is_fixed_length=(cls, version) in of_g.is_fixed_length))
+    return ofclasses
+
+def generate_init(out, name, version):
+    util.render_template(out, 'init.py')
+
+def generate_action(out, name, version):
+    ofclasses = [x for x in build_ofclasses(version)
+                 if utils.class_is_action(x.name)]
+    util.render_template(out, 'action.py', ofclasses=ofclasses)
+
+def generate_common(out, name, version):
+    ofclasses = [x for x in build_ofclasses(version)
+                 if not utils.class_is_message(x.name)
+                    and not utils.class_is_action(x.name)
+                    and not utils.class_is_list(x.name)]
+    util.render_template(out, 'common.py', ofclasses=ofclasses)
+
+def generate_const(out, name, version):
+    groups = {}
+    for (group, idents) in of_g.identifiers_by_group.items():
+        items = []
+        for ident in idents:
+            info = of_g.identifiers[ident]
+            if version in info["values_by_version"]:
+                items.append((info["ofp_name"], info["values_by_version"][version]))
+        if items:
+            groups[group] = items
+    util.render_template(out, 'const.py', version=version, groups=groups)
+
+def generate_message(out, name, version):
+    ofclasses = [x for x in build_ofclasses(version)
+                 if utils.class_is_message(x.name)]
+    util.render_template(out, 'message.py', ofclasses=ofclasses, version=version)
+
+def generate_pp(out, name, version):
+    util.render_template(out, 'pp.py')
+
+def generate_util(out, name, version):
+    util.render_template(out, 'util.py')
diff --git a/py_gen/oftype.py b/py_gen/oftype.py
new file mode 100644
index 0000000..ee5040c
--- /dev/null
+++ b/py_gen/oftype.py
@@ -0,0 +1,180 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+import of_g
+import loxi_utils.loxi_utils as utils
+import unittest
+
+class OFType(object):
+    """
+    Encapsulates knowledge about the OpenFlow type system.
+    """
+
+    version = None
+    base = None
+    is_array = False
+    array_length = None
+
+    def __init__(self, string, version):
+        self.version = version
+        self.array_length, self.base = utils.type_dec_to_count_base(string)
+        self.is_array = self.array_length != 1
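+        # e.g. a declaration like "uint8_t[4]" is expected to parse to base
+        # "uint8_t" with array_length 4; plain scalars get a length of 1.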
+
+    def gen_init_expr(self):
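+        # Returns a Python expression used to initialize a member of this
+        # type; e.g. an of_mac_addr_t yields "[0,0,0,0,0,0]" and a uint8_t[4]
+        # array yields "[0,0,0,0]".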
+        if utils.class_is_list(self.base):
+            v = "[]"
+        elif self.base.find("uint") == 0 or self.base in ["char", "of_port_no_t"]:
+            v = "0"
+        elif self.base == 'of_mac_addr_t':
+            v = '[0,0,0,0,0,0]'
+        elif self.base == 'of_wc_bmap_t':
+            v = 'const.OFPFW_ALL'
+        elif self.base in ['of_octets_t', 'of_port_name_t', 'of_table_name_t',
+                           'of_desc_str_t', 'of_serial_num_t']:
+            v = '""'
+        elif self.base == 'of_match_t':
+            v = 'common.match()'
+        elif self.base == 'of_port_desc_t':
+            v = 'common.port_desc()'
+        else:
+            v = "None"
+
+        if self.is_array:
+            return "[" + ','.join([v] * self.array_length) + "]"
+        else:
+            return v
+
+    def gen_pack_expr(self, expr_expr):
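+        # Produces the packing expression for one member; e.g. a uint16_t
+        # member becomes struct.pack("!H", <expr>) while of_octets_t members
+        # are emitted as-is.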
+        pack_fmt = self._pack_fmt()
+        if pack_fmt and not self.is_array:
+            return 'struct.pack("!%s", %s)' % (pack_fmt, expr_expr)
+        elif pack_fmt and self.is_array:
+            return 'struct.pack("!%s%s", *%s)' % (self.array_length, pack_fmt, expr_expr)
+        elif self.base == 'of_octets_t':
+            return expr_expr
+        elif utils.class_is_list(self.base):
+            return '"".join([x.pack() for x in %s])' % expr_expr
+        elif self.base == 'of_mac_addr_t':
+            return 'struct.pack("!6B", *%s)' % expr_expr
+        elif self.base in ['of_match_t', 'of_port_desc_t']:
+            return '%s.pack()' % expr_expr
+        elif self.base == 'of_port_name_t':
+            return self._gen_string_pack_expr(16, expr_expr)
+        elif self.base == 'of_table_name_t' or self.base == 'of_serial_num_t':
+            return self._gen_string_pack_expr(32, expr_expr)
+        elif self.base == 'of_desc_str_t':
+            return self._gen_string_pack_expr(256, expr_expr)
+        else:
+            return "'TODO pack %s'" % self.base
+
+    def _gen_string_pack_expr(self, length, expr_expr):
+        return 'struct.pack("!%ds", %s)' % (length, expr_expr)
+
+    def gen_unpack_expr(self, buf_expr, offset_expr):
+        pack_fmt = self._pack_fmt()
+        if pack_fmt and not self.is_array:
+            return "struct.unpack_from('!%s', %s, %s)[0]" % (pack_fmt, buf_expr, offset_expr)
+        elif pack_fmt and self.is_array:
+            return "list(struct.unpack_from('!%d%s', %s, %s))" % (self.array_length, pack_fmt, buf_expr, offset_expr)
+        elif self.base == 'of_octets_t':
+            return "%s[%s:]" % (buf_expr, offset_expr)
+        elif self.base == 'of_mac_addr_t':
+            return "list(struct.unpack_from('!6B', %s, %s))" % (buf_expr, offset_expr)
+        elif self.base == 'of_match_t':
+            return 'common.match.unpack(buffer(%s, %s))' % (buf_expr, offset_expr)
+        elif self.base == 'of_port_desc_t':
+            return 'common.port_desc.unpack(buffer(%s, %s))' % (buf_expr, offset_expr)
+        elif self.base == 'of_list_action_t':
+            return 'action.unpack_list(buffer(%s, %s))' % (buf_expr, offset_expr)
+        elif self.base == 'of_list_flow_stats_entry_t':
+            return 'common.unpack_list_flow_stats_entry(buffer(%s, %s))' % (buf_expr, offset_expr)
+        elif self.base == 'of_list_queue_prop_t':
+            return 'common.unpack_list_queue_prop(buffer(%s, %s))' % (buf_expr, offset_expr)
+        elif self.base == 'of_list_packet_queue_t':
+            return 'common.unpack_list_packet_queue(buffer(%s, %s))' % (buf_expr, offset_expr)
+        elif self.base == 'of_port_name_t':
+            return self._gen_string_unpack_expr(16, buf_expr, offset_expr)
+        elif self.base == 'of_table_name_t' or self.base == 'of_serial_num_t':
+            return self._gen_string_unpack_expr(32, buf_expr, offset_expr)
+        elif self.base == 'of_desc_str_t':
+            return self._gen_string_unpack_expr(256, buf_expr, offset_expr)
+        elif utils.class_is_list(self.base):
+            element_cls = utils.list_to_entry_type(self.base)[:-2]
+            if (element_cls, self.version) in of_g.is_fixed_length:
+                klass_name = self.base[8:-2]
+                element_size = of_g.base_length[(element_cls, self.version)]
+                return 'util.unpack_array(common.%s.unpack, %d, buffer(%s, %s))' % (klass_name, element_size, buf_expr, offset_expr)
+            else:
+                return "None # TODO unpack list %s" % self.base
+        else:
+            return "None # TODO unpack %s" % self.base
+
+    def _gen_string_unpack_expr(self, length, buf_expr, offset_expr):
+        return 'str(buffer(%s, %s, %d)).rstrip("\\x00")' % (buf_expr, offset_expr, length)
+
+    def _pack_fmt(self):
+        if self.base == "char":
+            return "B"
+        if self.base == "uint8_t":
+            return "B"
+        if self.base == "uint16_t":
+            return "H"
+        if self.base == "uint32_t":
+            return "L"
+        if self.base == "uint64_t":
+            return "Q"
+        if self.base == "of_port_no_t":
+            if self.version == of_g.VERSION_1_0:
+                return "H"
+            else:
+                return "L"
+        if self.base == "of_fm_cmd_t":
+            if self.version == of_g.VERSION_1_0:
+                return "H"
+            else:
+                return "B"
+        if self.base in ["of_wc_bmap_t", "of_match_bmap_t"]:
+            if self.version in [of_g.VERSION_1_0, of_g.VERSION_1_1]:
+                return "L"
+            else:
+                return "Q"
+        return None
+
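+# Illustrative usage (an editor's sketch, not part of the generator): the
+# strings produced above are spliced into generated Python source by the
+# py_gen templates. Assuming of_g.VERSION_1_0 == 1, as the tests below do:
+#   OFType("uint16_t", 1).gen_pack_expr("self.flags")
+#     -> 'struct.pack("!H", self.flags)'
+#   OFType("of_port_no_t", 1).gen_unpack_expr("buf", 4)
+#     -> "struct.unpack_from('!H', buf, 4)[0]"
+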
+class TestOFType(unittest.TestCase):
+    def test_init(self):
+        from oftype import OFType
+        self.assertEquals("None", OFType("of_list_action_t", 1).gen_init_expr())
+        self.assertEquals("[0,0,0]", OFType("uint32_t[3]", 1).gen_init_expr())
+
+    def test_pack(self):
+        self.assertEquals('struct.pack("!16s", "foo")', OFType("of_port_name_t", 1).gen_pack_expr('"foo"'))
+
+    def test_unpack(self):
+        self.assertEquals('str(buffer(buf, 8, 16)).rstrip("\\x00")', OFType("of_port_name_t", 1).gen_unpack_expr('buf', 8))
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/py_gen/templates/.gitignore b/py_gen/templates/.gitignore
new file mode 100644
index 0000000..81fc9b5
--- /dev/null
+++ b/py_gen/templates/.gitignore
@@ -0,0 +1,2 @@
+# Tenjin cache files
+/*.cache
diff --git a/py_gen/templates/_autogen.py b/py_gen/templates/_autogen.py
new file mode 100644
index 0000000..520ad9e
--- /dev/null
+++ b/py_gen/templates/_autogen.py
@@ -0,0 +1,30 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+:: import inspect, os
+# Automatically generated by LOXI from template #{os.path.basename(inspect.stack()[3][1])}
+# Do not modify
diff --git a/py_gen/templates/_copyright.py b/py_gen/templates/_copyright.py
new file mode 100644
index 0000000..24dc1b6
--- /dev/null
+++ b/py_gen/templates/_copyright.py
@@ -0,0 +1,30 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+# Copyright (c) 2008 The Board of Trustees of The Leland Stanford Junior University
+# Copyright (c) 2011, 2012 Open Networking Foundation
+# Copyright (c) 2012, 2013 Big Switch Networks, Inc.
diff --git a/py_gen/templates/_pack.py b/py_gen/templates/_pack.py
new file mode 100644
index 0000000..0d50a38
--- /dev/null
+++ b/py_gen/templates/_pack.py
@@ -0,0 +1,47 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+:: # TODO coalesce format strings
+:: all_members = ofclass.members[:]
+:: if ofclass.length_member: all_members.append(ofclass.length_member)
+:: all_members.extend(ofclass.type_members)
+:: all_members.sort(key=lambda x: x.offset)
+:: length_member_index = None
+:: index = 0
+:: for m in all_members:
+::     if m == ofclass.length_member:
+::         length_member_index = index
+        packed.append(${m.oftype.gen_pack_expr('0')}) # placeholder for ${m.name} at index ${length_member_index}
+::     else:
+        packed.append(${m.oftype.gen_pack_expr('self.' + m.name)})
+::     #endif
+::     index += 1
+:: #endfor
+:: if length_member_index != None:
+        length = sum([len(x) for x in packed])
+        packed[${length_member_index}] = ${ofclass.length_member.oftype.gen_pack_expr('length')}
+:: #endif
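+::
+:: # Editor's illustration (assumed output, not authoritative): for a class
+:: # whose wire layout is version, type, length, xid the loop above emits
+:: # roughly the following into the generated pack() body:
+:: #   packed.append(struct.pack("!B", self.version))
+:: #   packed.append(struct.pack("!B", self.type))
+:: #   packed.append(struct.pack("!H", 0)) # placeholder for length at index 2
+:: #   packed.append(struct.pack("!L", self.xid))
+:: #   length = sum([len(x) for x in packed])
+:: #   packed[2] = struct.pack("!H", length)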
diff --git a/py_gen/templates/_pack_packet_out.py b/py_gen/templates/_pack_packet_out.py
new file mode 100644
index 0000000..ad8b827
--- /dev/null
+++ b/py_gen/templates/_pack_packet_out.py
@@ -0,0 +1,39 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+        packed.append(struct.pack("!B", self.version))
+        packed.append(struct.pack("!B", self.type))
+        packed.append(struct.pack("!H", 0)) # placeholder for length at index 3
+        packed.append(struct.pack("!L", self.xid))
+        packed.append(struct.pack("!L", self.buffer_id))
+        packed.append(struct.pack("!H", self.in_port))
+        packed_actions = "".join([x.pack() for x in self.actions])
+        packed.append(struct.pack("!H", len(packed_actions)))
+        packed.append(packed_actions)
+        packed.append(self.data)
+        length = sum([len(x) for x in packed])
+        packed[2] = struct.pack("!H", length)
diff --git a/py_gen/templates/_pretty_print.py b/py_gen/templates/_pretty_print.py
new file mode 100644
index 0000000..604cd94
--- /dev/null
+++ b/py_gen/templates/_pretty_print.py
@@ -0,0 +1,61 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+        q.text("${ofclass.pyname} {")
+        with q.group():
+            with q.indent(2):
+                q.breakable()
+:: first = True
+:: for m in ofclass.members:
+:: if m.name == 'actions_len': continue
+:: if not first:
+                q.text(","); q.breakable()
+:: else:
+:: first = False
+:: #endif
+                q.text("${m.name} = ");
+:: if m.name == "xid":
+                if self.${m.name} != None:
+                    q.text("%#x" % self.${m.name})
+                else:
+                    q.text('None')
+:: elif m.oftype.base == 'of_mac_addr_t':
+                q.text(util.pretty_mac(self.${m.name}))
+:: elif m.oftype.base == 'uint32_t' and m.name.startswith("ipv4"):
+                q.text(util.pretty_ipv4(self.${m.name}))
+:: elif m.oftype.base == 'of_wc_bmap_t':
+                q.text(util.pretty_wildcards(self.${m.name}))
+:: elif m.oftype.base == 'of_port_no_t':
+                q.text(util.pretty_port(self.${m.name}))
+:: elif m.oftype.base.startswith("uint") and not m.oftype.is_array:
+                q.text("%#x" % self.${m.name})
+:: else:
+                q.pp(self.${m.name})
+:: #endif
+:: #endfor
+            q.breakable()
+        q.text('}')
diff --git a/py_gen/templates/_unpack.py b/py_gen/templates/_unpack.py
new file mode 100644
index 0000000..173ebb5
--- /dev/null
+++ b/py_gen/templates/_unpack.py
@@ -0,0 +1,49 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+:: # TODO coalesce format strings
+:: all_members = ofclass.members[:]
+:: if ofclass.length_member: all_members.append(ofclass.length_member)
+:: all_members.extend(ofclass.type_members)
+:: all_members.sort(key=lambda x: x.offset)
+:: for m in all_members:
+::     unpack_expr = m.oftype.gen_unpack_expr('buf', m.offset)
+::     if m == ofclass.length_member:
+        _length = ${unpack_expr}
+        assert(_length == len(buf))
+:: if ofclass.is_fixed_length:
+        if _length != ${ofclass.min_length}: raise loxi.ProtocolError("${ofclass.pyname} length is %d, should be ${ofclass.min_length}" % _length)
+:: else:
+        if _length < ${ofclass.min_length}: raise loxi.ProtocolError("${ofclass.pyname} length is %d, should be at least ${ofclass.min_length}" % _length)
+:: #endif
+::     elif m in ofclass.type_members:
+        ${m.name} = ${unpack_expr}
+        assert(${m.name} == ${m.value})
+::     else:
+        obj.${m.name} = ${unpack_expr}
+::     #endif
+:: #endfor
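+::
+:: # Editor's note (assumed behaviour, not authoritative): type members are
+:: # unpacked into locals and asserted against their fixed values, the length
+:: # member is checked against len(buf) and the class's minimum length, and
+:: # every other member is assigned onto obj, e.g.
+:: #   obj.xid = struct.unpack_from('!L', buf, 4)[0]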
diff --git a/py_gen/templates/_unpack_packet_out.py b/py_gen/templates/_unpack_packet_out.py
new file mode 100644
index 0000000..b97e829
--- /dev/null
+++ b/py_gen/templates/_unpack_packet_out.py
@@ -0,0 +1,40 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+        version = struct.unpack_from('!B', buf, 0)[0]
+        assert(version == const.OFP_VERSION)
+        type = struct.unpack_from('!B', buf, 1)[0]
+        assert(type == const.OFPT_PACKET_OUT)
+        _length = struct.unpack_from('!H', buf, 2)[0]
+        assert(_length == len(buf))
+        if _length < 16: raise loxi.ProtocolError("packet_out length is %d, should be at least 16" % _length)
+        obj.xid = struct.unpack_from('!L', buf, 4)[0]
+        obj.buffer_id = struct.unpack_from('!L', buf, 8)[0]
+        obj.in_port = struct.unpack_from('!H', buf, 12)[0]
+        actions_len = struct.unpack_from('!H', buf, 14)[0]
+        obj.actions = action.unpack_list(buffer(buf, 16, actions_len))
+        obj.data = str(buffer(buf, 16+actions_len))
diff --git a/py_gen/templates/action.py b/py_gen/templates/action.py
new file mode 100644
index 0000000..439e56f
--- /dev/null
+++ b/py_gen/templates/action.py
@@ -0,0 +1,145 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+:: import itertools
+:: include('_copyright.py')
+
+:: include('_autogen.py')
+
+import struct
+import const
+import util
+import loxi
+
+def unpack_list(buf):
+    if len(buf) % 8 != 0: raise loxi.ProtocolError("action list length not a multiple of 8")
+    actions = []
+    offset = 0
+    while offset < len(buf):
+        type, length = struct.unpack_from("!HH", buf, offset)
+        if length == 0: raise loxi.ProtocolError("action length is 0")
+        if length % 8 != 0: raise loxi.ProtocolError("action length not a multiple of 8")
+        if offset + length > len(buf): raise loxi.ProtocolError("action length overruns list length")
+        parser = parsers.get(type)
+        if not parser: raise loxi.ProtocolError("unknown action type %d" % type)
+        actions.append(parser(buffer(buf, offset, length)))
+        offset += length
+    return actions
+
+class Action(object):
+    type = None # override in subclass
+
+:: for ofclass in ofclasses:
+:: nonskip_members = [m for m in ofclass.members if not m.skip]
+class ${ofclass.pyname}(Action):
+:: for m in ofclass.type_members:
+    ${m.name} = ${m.value}
+:: #endfor
+
+    def __init__(self, ${', '.join(["%s=None" % m.name for m in nonskip_members])}):
+:: for m in nonskip_members:
+        if ${m.name} != None:
+            self.${m.name} = ${m.name}
+        else:
+            self.${m.name} = ${m.oftype.gen_init_expr()}
+:: #endfor
+
+    def pack(self):
+        packed = []
+:: include("_pack.py", ofclass=ofclass)
+        return ''.join(packed)
+
+    @staticmethod
+    def unpack(buf):
+        obj = ${ofclass.pyname}()
+:: include("_unpack.py", ofclass=ofclass)
+        return obj
+
+    def __eq__(self, other):
+        if type(self) != type(other): return False
+        if self.type != other.type: return False
+:: for m in nonskip_members:
+        if self.${m.name} != other.${m.name}: return False
+:: #endfor
+        return True
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def show(self):
+        import loxi.pp
+        return loxi.pp.pp(self)
+
+    def pretty_print(self, q):
+:: include('_pretty_print.py', ofclass=ofclass)
+
+:: #endfor
+
+def parse_vendor(buf):
+    if len(buf) < 16:
+        raise loxi.ProtocolError("experimenter action too short")
+
+    experimenter, = struct.unpack_from("!L", buf, 4)
+    if experimenter == 0x005c16c7: # Big Switch Networks
+        subtype, = struct.unpack_from("!L", buf, 8)
+    elif experimenter == 0x00002320: # Nicira
+        subtype, = struct.unpack_from("!H", buf, 8)
+    else:
+        raise loxi.ProtocolError("unexpected experimenter id %#x" % experimenter)
+
+    if subtype in experimenter_parsers[experimenter]:
+        return experimenter_parsers[experimenter][subtype](buf)
+    else:
+        raise loxi.ProtocolError("unexpected BSN experimenter subtype %#x" % subtype)
+
+parsers = {
+:: sort_key = lambda x: x.type_members[0].value
+:: msgtype_groups = itertools.groupby(sorted(ofclasses, key=sort_key), sort_key)
+:: for (k, v) in msgtype_groups:
+:: v = list(v)
+:: if len(v) == 1:
+    ${k} : ${v[0].pyname}.unpack,
+:: else:
+    ${k} : parse_${k[12:].lower()},
+:: #endif
+:: #endfor
+}
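+:: # Editor's note: k is a string such as 'const.OFPAT_VENDOR'; k[12:] strips
+:: # the 'const.OFPAT_' prefix, so a type value shared by several classes
+:: # dispatches to a helper like parse_vendor defined above.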
+
+:: experimenter_ofclasses = [x for x in ofclasses if x.type_members[0].value == 'const.OFPAT_VENDOR']
+:: sort_key = lambda x: x.type_members[1].value
+:: experimenter_ofclasses.sort(key=sort_key)
+:: grouped = itertools.groupby(experimenter_ofclasses, sort_key)
+experimenter_parsers = {
+:: for (experimenter, v) in grouped:
+    ${experimenter} : {
+:: for ofclass in v:
+        ${ofclass.type_members[2].value}: ${ofclass.pyname}.unpack,
+:: #endfor
+    },
+:: #endfor
+}
diff --git a/py_gen/templates/common.py b/py_gen/templates/common.py
new file mode 100644
index 0000000..08fb65d
--- /dev/null
+++ b/py_gen/templates/common.py
@@ -0,0 +1,125 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+:: include('_copyright.py')
+
+:: include('_autogen.py')
+
+import sys
+import struct
+import action
+import const
+import util
+
+# HACK make this module visible as 'common' to simplify code generation
+common = sys.modules[__name__]
+
+def unpack_list_flow_stats_entry(buf):
+    entries = []
+    offset = 0
+    while offset < len(buf):
+        length, = struct.unpack_from("!H", buf, offset)
+        if length == 0: raise loxi.ProtocolError("entry length is 0")
+        if offset + length > len(buf): raise loxi.ProtocolError("entry length overruns list length")
+        entries.append(flow_stats_entry.unpack(buffer(buf, offset, length)))
+        offset += length
+    return entries
+
+def unpack_list_queue_prop(buf):
+    entries = []
+    offset = 0
+    while offset < len(buf):
+        type, length, = struct.unpack_from("!HH", buf, offset)
+        if length == 0: raise loxi.ProtocolError("entry length is 0")
+        if offset + length > len(buf): raise loxi.ProtocolError("entry length overruns list length")
+        if type == const.OFPQT_MIN_RATE:
+            entry = queue_prop_min_rate.unpack(buffer(buf, offset, length))
+        else:
+            raise loxi.ProtocolError("unknown queue prop %d" % type)
+        entries.append(entry)
+        offset += length
+    return entries
+
+def unpack_list_packet_queue(buf):
+    entries = []
+    offset = 0
+    while offset < len(buf):
+        _, length, = struct.unpack_from("!LH", buf, offset)
+        if length == 0: raise loxi.ProtocolError("entry length is 0")
+        if offset + length > len(buf): raise loxi.ProtocolError("entry length overruns list length")
+        entries.append(packet_queue.unpack(buffer(buf, offset, length)))
+        offset += length
+    return entries
+
+:: for ofclass in ofclasses:
+class ${ofclass.pyname}(object):
+:: for m in ofclass.type_members:
+    ${m.name} = ${m.value}
+:: #endfor
+
+    def __init__(self, ${', '.join(["%s=None" % m.name for m in ofclass.members])}):
+:: for m in ofclass.members:
+        if ${m.name} != None:
+            self.${m.name} = ${m.name}
+        else:
+            self.${m.name} = ${m.oftype.gen_init_expr()}
+:: #endfor
+
+    def pack(self):
+        packed = []
+:: include("_pack.py", ofclass=ofclass)
+        return ''.join(packed)
+
+    @staticmethod
+    def unpack(buf):
+        assert(len(buf) >= ${ofclass.min_length}) # Should be verified by caller
+        obj = ${ofclass.pyname}()
+:: include("_unpack.py", ofclass=ofclass)
+        return obj
+
+    def __eq__(self, other):
+        if type(self) != type(other): return False
+:: for m in ofclass.members:
+        if self.${m.name} != other.${m.name}: return False
+:: #endfor
+        return True
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def show(self):
+        import loxi.pp
+        return loxi.pp.pp(self)
+
+    def pretty_print(self, q):
+:: include('_pretty_print.py', ofclass=ofclass)
+
+:: if ofclass.name.startswith("of_match_v"):
+match = ${ofclass.pyname}
+
+:: #endif
+:: #endfor
diff --git a/py_gen/templates/const.py b/py_gen/templates/const.py
new file mode 100644
index 0000000..c5a0e93
--- /dev/null
+++ b/py_gen/templates/const.py
@@ -0,0 +1,66 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+:: blacklisted_map_groups = ['macro_definitions']
+:: blacklisted_map_idents = ['OFPFW_NW_DST_BITS', 'OFPFW_NW_SRC_BITS',
+::     'OFPFW_NW_SRC_SHIFT', 'OFPFW_NW_DST_SHIFT', 'OFPFW_NW_SRC_ALL',
+::     'OFPFW_NW_SRC_MASK', 'OFPFW_NW_DST_ALL', 'OFPFW_NW_DST_MASK',
+::     'OFPFW_ALL']
+:: include('_copyright.py')
+
+:: include('_autogen.py')
+
+OFP_VERSION = ${version}
+
+:: for (group, idents) in sorted(groups.items()):
+::    idents.sort(key=lambda (ident, value): eval(value))
+# Identifiers from group ${group}
+::    for (ident, value) in idents:
+::        if version == 1 and ident.startswith('OFPP_'):
+::        # HACK loxi converts these to 32-bit
+${ident} = ${"%#x" % (int(value, 16) & 0xffff)}
+::        else:
+${ident} = ${value}
+::        #endif
+::    #endfor
+
+::    if group not in blacklisted_map_groups:
+${group}_map = {
+::        for (ident, value) in idents:
+::            if ident in blacklisted_map_idents:
+::                pass
+::            elif version == 1 and ident.startswith('OFPP_'):
+::                # HACK loxi converts these to 32-bit
+    ${"%#x" % (int(value, 16) & 0xffff)}: ${repr(ident)},
+::        else:
+    ${value}: ${repr(ident)},
+::            #endif
+::        #endfor
+}
+
+::     #endif
+:: #endfor
diff --git a/py_gen/templates/init.py b/py_gen/templates/init.py
new file mode 100644
index 0000000..abc8d70
--- /dev/null
+++ b/py_gen/templates/init.py
@@ -0,0 +1,35 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+:: include('_copyright.py')
+
+:: include('_autogen.py')
+
+import action, common, const, message
+from const import *
+from common import *
+from loxi import ProtocolError
diff --git a/py_gen/templates/message.py b/py_gen/templates/message.py
new file mode 100644
index 0000000..71a2871
--- /dev/null
+++ b/py_gen/templates/message.py
@@ -0,0 +1,219 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+:: import itertools
+:: include('_copyright.py')
+
+:: include('_autogen.py')
+
+import struct
+import loxi
+import const
+import common
+import action # for unpack_list
+import util
+
+class Message(object):
+    version = const.OFP_VERSION
+    type = None # override in subclass
+    xid = None
+
+:: for ofclass in ofclasses:
+:: nonskip_members = [m for m in ofclass.members if not m.skip]
+class ${ofclass.pyname}(Message):
+:: for m in ofclass.type_members:
+    ${m.name} = ${m.value}
+:: #endfor
+
+    def __init__(self, ${', '.join(["%s=None" % m.name for m in nonskip_members])}):
+        self.xid = xid
+:: for m in [x for x in nonskip_members if x.name != 'xid']:
+        if ${m.name} != None:
+            self.${m.name} = ${m.name}
+        else:
+            self.${m.name} = ${m.oftype.gen_init_expr()}
+:: #endfor
+
+    def pack(self):
+        packed = []
+:: if ofclass.name == 'of_packet_out':
+:: include('_pack_packet_out.py', ofclass=ofclass)
+:: else:
+:: include('_pack.py', ofclass=ofclass)
+:: #endif
+        return ''.join(packed)
+
+    @staticmethod
+    def unpack(buf):
+        if len(buf) < 8: raise loxi.ProtocolError("buffer too short to contain an OpenFlow message")
+        obj = ${ofclass.pyname}()
+:: if ofclass.name == 'of_packet_out':
+:: include('_unpack_packet_out.py', ofclass=ofclass)
+:: else:
+:: include('_unpack.py', ofclass=ofclass)
+:: #endif
+        return obj
+
+    def __eq__(self, other):
+        if type(self) != type(other): return False
+        if self.version != other.version: return False
+        if self.type != other.type: return False
+:: for m in nonskip_members:
+        if self.${m.name} != other.${m.name}: return False
+:: #endfor
+        return True
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __str__(self):
+        return self.show()
+
+    def show(self):
+        import loxi.pp
+        return loxi.pp.pp(self)
+
+    def pretty_print(self, q):
+:: include('_pretty_print.py', ofclass=ofclass)
+
+:: #endfor
+
+def parse_header(buf):
+    if len(buf) < 8:
+        raise loxi.ProtocolError("too short to be an OpenFlow message")
+    return struct.unpack_from("!BBHL", buf)
+
+def parse_message(buf):
+    msg_ver, msg_type, msg_len, msg_xid = parse_header(buf)
+    if msg_ver != const.OFP_VERSION and msg_type != const.OFPT_HELLO:
+        raise loxi.ProtocolError("wrong OpenFlow version")
+    if len(buf) != msg_len:
+        raise loxi.ProtocolError("incorrect message size")
+    if msg_type in parsers:
+        return parsers[msg_type](buf)
+    else:
+        raise loxi.ProtocolError("unexpected message type")
+
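+:: # Editor's illustration (assumed usage, not generated code): callers
+:: # typically invoke
+:: #   msg = message.parse_message(raw_bytes)
+:: # and receive an instance of the matching generated class, dispatched
+:: # through the parsers table built below.
+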
+:: # TODO fix for OF 1.1+
+def parse_flow_mod(buf):
+    if len(buf) < 56 + 2:
+        raise loxi.ProtocolError("message too short")
+    cmd, = struct.unpack_from("!H", buf, 56)
+    if cmd in flow_mod_parsers:
+        return flow_mod_parsers[cmd](buf)
+    else:
+        raise loxi.ProtocolError("unexpected flow mod cmd %u" % cmd)
+
+def parse_stats_reply(buf):
+    if len(buf) < 8 + 2:
+        raise loxi.ProtocolError("message too short")
+    stats_type, = struct.unpack_from("!H", buf, 8)
+    if stats_type in stats_reply_parsers:
+        return stats_reply_parsers[stats_type](buf)
+    else:
+        raise loxi.ProtocolError("unexpected stats type %u" % stats_type)
+
+def parse_stats_request(buf):
+    if len(buf) < 8 + 2:
+        raise loxi.ProtocolError("message too short")
+    stats_type, = struct.unpack_from("!H", buf, 8)
+    if stats_type in stats_request_parsers:
+        return stats_request_parsers[stats_type](buf)
+    else:
+        raise loxi.ProtocolError("unexpected stats type %u" % stats_type)
+
+def parse_vendor(buf):
+    if len(buf) < 16:
+        raise loxi.ProtocolError("experimenter message too short")
+
+    experimenter, = struct.unpack_from("!L", buf, 8)
+    if experimenter == 0x005c16c7: # Big Switch Networks
+        subtype, = struct.unpack_from("!L", buf, 12)
+    elif experimenter == 0x00002320: # Nicira
+        subtype, = struct.unpack_from("!L", buf, 12)
+    else:
+        raise loxi.ProtocolError("unexpected experimenter id %#x" % experimenter)
+
+    if subtype in experimenter_parsers[experimenter]:
+        return experimenter_parsers[experimenter][subtype](buf)
+    else:
+        raise loxi.ProtocolError("unexpected experimenter %#x subtype %#x" % (experimenter, subtype))
+
+parsers = {
+:: sort_key = lambda x: x.type_members[1].value
+:: msgtype_groups = itertools.groupby(sorted(ofclasses, key=sort_key), sort_key)
+:: for (k, v) in msgtype_groups:
+:: v = list(v)
+:: if len(v) == 1:
+    ${k} : ${v[0].pyname}.unpack,
+:: else:
+    ${k} : parse_${k[11:].lower()},
+:: #endif
+:: #endfor
+}
+
+flow_mod_parsers = {
+    const.OFPFC_ADD : flow_add.unpack,
+    const.OFPFC_MODIFY : flow_modify.unpack,
+    const.OFPFC_MODIFY_STRICT : flow_modify_strict.unpack,
+    const.OFPFC_DELETE : flow_delete.unpack,
+    const.OFPFC_DELETE_STRICT : flow_delete_strict.unpack,
+}
+
+stats_reply_parsers = {
+    const.OFPST_DESC : desc_stats_reply.unpack,
+    const.OFPST_FLOW : flow_stats_reply.unpack,
+    const.OFPST_AGGREGATE : aggregate_stats_reply.unpack,
+    const.OFPST_TABLE : table_stats_reply.unpack,
+    const.OFPST_PORT : port_stats_reply.unpack,
+    const.OFPST_QUEUE : queue_stats_reply.unpack,
+    const.OFPST_VENDOR : experimenter_stats_reply.unpack,
+}
+
+stats_request_parsers = {
+    const.OFPST_DESC : desc_stats_request.unpack,
+    const.OFPST_FLOW : flow_stats_request.unpack,
+    const.OFPST_AGGREGATE : aggregate_stats_request.unpack,
+    const.OFPST_TABLE : table_stats_request.unpack,
+    const.OFPST_PORT : port_stats_request.unpack,
+    const.OFPST_QUEUE : queue_stats_request.unpack,
+    const.OFPST_VENDOR : experimenter_stats_request.unpack,
+}
+
+:: experimenter_ofclasses = [x for x in ofclasses if x.type_members[1].value == 'const.OFPT_VENDOR']
+:: sort_key = lambda x: x.type_members[2].value
+:: experimenter_ofclasses.sort(key=sort_key)
+:: grouped = itertools.groupby(experimenter_ofclasses, sort_key)
+experimenter_parsers = {
+:: for (experimenter, v) in grouped:
+    ${experimenter} : {
+:: for ofclass in v:
+        ${ofclass.type_members[3].value}: ${ofclass.pyname}.unpack,
+:: #endfor
+    },
+:: #endfor
+}
diff --git a/py_gen/templates/pp.py b/py_gen/templates/pp.py
new file mode 100644
index 0000000..0021a28
--- /dev/null
+++ b/py_gen/templates/pp.py
@@ -0,0 +1,277 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+# Copyright 2013, Big Switch Networks, Inc.
+
+"""
+pp - port of Ruby's PP library
+Also based on Lindig, C. (2000). Strictly Pretty.
+
+Example usage:
+>>> import pp.pp as pp
+>>> print pp([[1, 2], [3, 4]], maxwidth=15)
+[
+  [ 1, 2 ],
+  [ 3, 4 ]
+]
+"""
+import unittest
+from contextlib import contextmanager
+
+def pp(obj, maxwidth=79):
+    """
+    Pretty-print the given object.
+    """
+    ctx = PrettyPrinter(maxwidth=maxwidth)
+    ctx.pp(obj)
+    return str(ctx)
+
+
+## Pretty-printers for builtin classes
+
+def pretty_print_list(pp, obj):
+    with pp.group():
+        pp.text('[')
+        with pp.indent(2):
+            for v in obj:
+                if not pp.first(): pp.text(',')
+                pp.breakable()
+                pp.pp(v)
+        pp.breakable()
+        pp.text(']')
+
+def pretty_print_dict(pp, obj):
+    with pp.group():
+        pp.text('{')
+        with pp.indent(2):
+            for (k, v) in sorted(obj.items()):
+                if not pp.first(): pp.text(',')
+                pp.breakable()
+                pp.pp(k)
+                pp.text(': ')
+                pp.pp(v)
+        pp.breakable()
+        pp.text('}')
+
+pretty_printers = {
+    list: pretty_print_list,
+    dict: pretty_print_dict,
+}
+
+
+## Implementation
+
+class PrettyPrinter(object):
+    def __init__(self, maxwidth):
+        self.maxwidth = maxwidth
+        self.cur_indent = 0
+        self.root_group = Group()
+        self.group_stack = [self.root_group]
+
+    def current_group(self):
+        return self.group_stack[-1]
+
+    def text(self, s):
+        self.current_group().append(str(s))
+
+    def breakable(self, sep=' '):
+        self.current_group().append(Breakable(sep, self.cur_indent))
+
+    def first(self):
+        return self.current_group().first()
+
+    @contextmanager
+    def indent(self, n):
+        self.cur_indent += n
+        yield
+        self.cur_indent -= n
+
+    @contextmanager
+    def group(self):
+        self.group_stack.append(Group())
+        yield
+        new_group = self.group_stack.pop()
+        self.current_group().append(new_group)
+
+    def pp(self, obj):
+        if hasattr(obj, "pretty_print"):
+            obj.pretty_print(self)
+        elif type(obj) in pretty_printers:
+            pretty_printers[type(obj)](self, obj)
+        else:
+            self.text(repr(obj))
+
+    def __str__(self):
+        return self.root_group.render(0, self.maxwidth)
+
+class Group(object):
+    __slots__ = ["fragments", "length", "_first"]
+
+    def __init__(self):
+        self.fragments = []
+        self.length = 0
+        self._first = True
+
+    def append(self, x):
+        self.fragments.append(x)
+        self.length += len(x)
+
+    def first(self):
+        if self._first:
+            self._first = False
+            return True
+        return False
+
+    def __len__(self):
+        return self.length
+
+    def render(self, curwidth, maxwidth):
+        dobreak = len(self) > (maxwidth - curwidth)
+
+        a = []
+        for x in self.fragments:
+            if isinstance(x, Breakable):
+                if dobreak:
+                    a.append('\n')
+                    a.append(' ' * x.indent)
+                    curwidth = 0
+                else:
+                    a.append(x.sep)
+            elif isinstance(x, Group):
+                a.append(x.render(curwidth, maxwidth))
+            else:
+                a.append(x)
+            curwidth += len(a[-1])
+        return ''.join(a)
+
+class Breakable(object):
+    __slots__ = ["sep", "indent"]
+
+    def __init__(self, sep, indent):
+        self.sep = sep
+        self.indent = indent
+
+    def __len__(self):
+        return len(self.sep)
+
+
+## Tests
+
+class TestPP(unittest.TestCase):
+    def test_scalars(self):
+        self.assertEquals(pp(1), "1")
+        self.assertEquals(pp("foo"), "'foo'")
+
+    def test_hash(self):
+        expected = """{ 1: 'a', 'b': 2 }"""
+        self.assertEquals(pp(eval(expected)), expected)
+        expected = """\
+{
+  1: 'a',
+  'b': 2
+}"""
+        self.assertEquals(pp(eval(expected), maxwidth=0), expected)
+
+    def test_array(self):
+        expected = """[ 1, 'a', 2 ]"""
+        self.assertEquals(pp(eval(expected)), expected)
+        expected = """\
+[
+  1,
+  'a',
+  2
+]"""
+        self.assertEquals(pp(eval(expected), maxwidth=0), expected)
+
+    def test_nested(self):
+        expected = """[ [ 1, 2 ], [ 3, 4 ] ]"""
+        self.assertEquals(pp(eval(expected)), expected)
+        expected = """\
+[
+  [
+    1,
+    2
+  ],
+  [
+    3,
+    4
+  ]
+]"""
+        self.assertEquals(pp(eval(expected), maxwidth=0), expected)
+
+    def test_breaking(self):
+        expected = """\
+[
+  [ 1, 2 ],
+  'abcdefghijklmnopqrstuvwxyz'
+]"""
+        self.assertEquals(pp(eval(expected), maxwidth=24), expected)
+        expected = """\
+[
+  [ 'abcd', 2 ],
+  [ '0123456789' ],
+  [
+    '0123456789',
+    'abcdefghij'
+  ],
+  [ 'abcdefghijklmnop' ],
+  [
+    'abcdefghijklmnopq'
+  ],
+  { 'k': 'v' },
+  {
+    1: [ 2, [ 3, 4 ] ],
+    'foo': 'abcdefghijklmnop'
+  }
+]"""
+        self.assertEquals(pp(eval(expected), maxwidth=24), expected)
+        expected = """\
+[
+  [ 1, 2 ],
+  [ 3, 4 ]
+]"""
+        self.assertEquals(pp(eval(expected), maxwidth=15), expected)
+
+    # This is an edge case where our simpler algorithm breaks down.
+    @unittest.expectedFailure
+    def test_greedy_breaking(self):
+        expected = """\
+abc def
+ghijklmnopqrstuvwxyz\
+"""
+        pp = PrettyPrinter(maxwidth=8)
+        pp.text("abc")
+        with pp.group():
+            pp.breakable()
+        pp.text("def")
+        with pp.group():
+            pp.breakable()
+        pp.text("ghijklmnopqrstuvwxyz")
+        self.assertEquals(str(pp), expected)
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/py_gen/templates/toplevel_init.py b/py_gen/templates/toplevel_init.py
new file mode 100644
index 0000000..c990aa3
--- /dev/null
+++ b/py_gen/templates/toplevel_init.py
@@ -0,0 +1,46 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+:: include('_copyright.py')
+
+:: include('_autogen.py')
+
+def protocol(ver):
+    """
+    Import and return the protocol module for the given wire version.
+    """
+    if ver == 1:
+        import of10
+        return of10
+    else:
+        raise ValueError
+
+class ProtocolError(Exception):
+    """
+    Raised when failing to deserialize an invalid OpenFlow message.
+    """
+    pass
diff --git a/py_gen/templates/util.py b/py_gen/templates/util.py
new file mode 100644
index 0000000..5933e14
--- /dev/null
+++ b/py_gen/templates/util.py
@@ -0,0 +1,77 @@
+:: # Copyright 2013, Big Switch Networks, Inc.
+:: #
+:: # LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+:: # the following special exception:
+:: #
+:: # LOXI Exception
+:: #
+:: # As a special exception to the terms of the EPL, you may distribute libraries
+:: # generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+:: # that copyright and licensing notices generated by LoxiGen are not altered or removed
+:: # from the LoxiGen Libraries and the notice provided below is (i) included in
+:: # the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+:: # documentation for the LoxiGen Libraries, if distributed in binary form.
+:: #
+:: # Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+:: #
+:: # You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+:: # a copy of the EPL at:
+:: #
+:: # http://www.eclipse.org/legal/epl-v10.html
+:: #
+:: # Unless required by applicable law or agreed to in writing, software
+:: # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+:: # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+:: # EPL for the specific language governing permissions and limitations
+:: # under the EPL.
+::
+:: include('_copyright.py')
+
+:: include('_autogen.py')
+
+import loxi
+import const
+
+def unpack_array(deserializer, element_size, buf):
+    """
+    Deserialize an array of fixed length elements.
+    The deserializer function should take a buffer and return the new object.
+    """
+    if len(buf) % element_size != 0: raise loxi.ProtocolError("invalid array length")
+    n = len(buf) / element_size
+    return [deserializer(buffer(buf, i*element_size, element_size)) for i in range(n)]
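+
+# Editor's illustration (assumed call site, not emitted here): the generated
+# modules invoke this as, for example,
+#   util.unpack_array(common.port_desc.unpack, 48, buffer(buf, offset))
+# where 48 is the fixed OpenFlow 1.0 ofp_phy_port size taken from of_g.base_length.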
+
+def pretty_mac(mac):
+    return ':'.join(["%02x" % x for x in mac])
+
+def pretty_ipv4(v):
+    return "%d.%d.%d.%d" % ((v >> 24) & 0xFF, (v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF)
+
+def pretty_flags(v, flag_names):
+    set_flags = []
+    for flag_name in flag_names:
+        flag_value = getattr(const, flag_name)
+        if v & flag_value == flag_value:
+            set_flags.append(flag_name)
+        elif v & flag_value:
+            set_flags.append('%s&%#x' % (flag_name, v & flag_value))
+        v &= ~flag_value
+    if v:
+        set_flags.append("%#x" % v)
+    return '|'.join(set_flags) or '0'
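+
+# Editor's illustration (assuming the OpenFlow 1.0 values OFPFW_IN_PORT=0x1
+# and OFPFW_DL_VLAN=0x2): pretty_flags(0x3, ['OFPFW_IN_PORT', 'OFPFW_DL_VLAN'])
+# returns 'OFPFW_IN_PORT|OFPFW_DL_VLAN'.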
+
+def pretty_wildcards(v):
+    if v == const.OFPFW_ALL:
+        return 'OFPFW_ALL'
+    flag_names = ['OFPFW_IN_PORT', 'OFPFW_DL_VLAN', 'OFPFW_DL_SRC', 'OFPFW_DL_DST',
+                  'OFPFW_DL_TYPE', 'OFPFW_NW_PROTO', 'OFPFW_TP_SRC', 'OFPFW_TP_DST',
+                  'OFPFW_NW_SRC_MASK', 'OFPFW_NW_DST_MASK', 'OFPFW_DL_VLAN_PCP',
+                  'OFPFW_NW_TOS']
+    return pretty_flags(v, flag_names)
+
+def pretty_port(v):
+    named_ports = [(k,v2) for (k,v2) in const.__dict__.iteritems() if k.startswith('OFPP_')]
+    for (k, v2) in named_ports:
+        if v == v2:
+            return k
+    return v
diff --git a/py_gen/tests.py b/py_gen/tests.py
new file mode 100644
index 0000000..23d40b8
--- /dev/null
+++ b/py_gen/tests.py
@@ -0,0 +1,1002 @@
+#!/usr/bin/env python
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+import unittest
+
+try:
+    import loxi
+    del loxi
+except ImportError:
+    exit("loxi package not found. Try setting PYTHONPATH.")
+
+class TestImports(unittest.TestCase):
+    def test_toplevel(self):
+        import loxi
+        self.assertTrue(hasattr(loxi, "ProtocolError"))
+        ofp = loxi.protocol(1)
+        self.assertEquals(ofp.OFP_VERSION, 1)
+        self.assertTrue(hasattr(ofp, "action"))
+        self.assertTrue(hasattr(ofp, "common"))
+        self.assertTrue(hasattr(ofp, "const"))
+        self.assertTrue(hasattr(ofp, "message"))
+
+    def test_version(self):
+        import loxi.of10
+        self.assertTrue(hasattr(loxi.of10, "ProtocolError"))
+        self.assertTrue(hasattr(loxi.of10, "OFP_VERSION"))
+        self.assertEquals(loxi.of10.OFP_VERSION, 1)
+        self.assertTrue(hasattr(loxi.of10, "action"))
+        self.assertTrue(hasattr(loxi.of10, "common"))
+        self.assertTrue(hasattr(loxi.of10, "const"))
+        self.assertTrue(hasattr(loxi.of10, "message"))
+
+class TestActions(unittest.TestCase):
+    def test_output_pack(self):
+        import loxi.of10 as ofp
+        expected = "\x00\x00\x00\x08\xff\xf8\xff\xff"
+        action = ofp.action.output(port=ofp.OFPP_IN_PORT, max_len=0xffff)
+        self.assertEquals(expected, action.pack())
+
+    def test_output_unpack(self):
+        import loxi.of10 as ofp
+
+        # Normal case
+        buf = "\x00\x00\x00\x08\xff\xf8\xff\xff"
+        action = ofp.action.output.unpack(buf)
+        self.assertEqual(action.port, ofp.OFPP_IN_PORT)
+        self.assertEqual(action.max_len, 0xffff)
+
+        # Invalid length
+        buf = "\x00\x00\x00\x09\xff\xf8\xff\xff\x00"
+        with self.assertRaises(ofp.ProtocolError):
+            ofp.action.output.unpack(buf)
+
+    def test_output_equality(self):
+        import loxi.of10 as ofp
+        action = ofp.action.output(port=1, max_len=0x1234)
+        action2 = ofp.action.output(port=1, max_len=0x1234)
+        self.assertEquals(action, action2)
+
+        action2.port = 2
+        self.assertNotEquals(action, action2)
+        action2.port = 1
+
+        action2.max_len = 0xffff
+        self.assertNotEquals(action, action2)
+        action2.max_len = 0x1234
+
+    def test_output_show(self):
+        import loxi.of10 as ofp
+        action = ofp.action.output(port=1, max_len=0x1234)
+        expected = "output { port = 1, max_len = 0x1234 }"
+        self.assertEquals(expected, action.show())
+
+    def test_bsn_set_tunnel_dst_pack(self):
+        import loxi.of10 as ofp
+        expected = ''.join([
+            "\xff\xff", "\x00\x10", # type/length
+            "\x00\x5c\x16\xc7", # experimenter
+            "\x00\x00\x00\x02", # subtype
+            "\x12\x34\x56\x78" # dst
+        ])
+        action = ofp.action.bsn_set_tunnel_dst(dst=0x12345678)
+        self.assertEquals(expected, action.pack())
+
+    def test_bsn_set_tunnel_dst_unpack(self):
+        import loxi.of10 as ofp
+        buf = ''.join([
+            "\xff\xff", "\x00\x10", # type/length
+            "\x00\x5c\x16\xc7", # experimenter
+            "\x00\x00\x00\x02", # subtype
+            "\x12\x34\x56\x78" # dst
+        ])
+        action = ofp.action.bsn_set_tunnel_dst.unpack(buf)
+        self.assertEqual(action.subtype, 2)
+        self.assertEqual(action.dst, 0x12345678)
+
+# Assumes action serialization/deserialization works
+class TestActionList(unittest.TestCase):
+    def test_normal(self):
+        import loxi.of10 as ofp
+
+        expected = []
+        bufs = []
+
+        def add(action):
+            expected.append(action)
+            bufs.append(action.pack())
+
+        add(ofp.action.output(port=1, max_len=0xffff))
+        add(ofp.action.output(port=2, max_len=0xffff))
+        add(ofp.action.output(port=ofp.OFPP_IN_PORT, max_len=0xffff))
+        add(ofp.action.bsn_set_tunnel_dst(dst=0x12345678))
+        add(ofp.action.nicira_dec_ttl())
+
+        actions = ofp.action.unpack_list(''.join(bufs))
+        self.assertEquals(actions, expected)
+
+    def test_empty_list(self):
+        import loxi.of10 as ofp
+        self.assertEquals(ofp.action.unpack_list(''), [])
+
+    def test_invalid_list_length(self):
+        import loxi.of10 as ofp
+        buf = '\x00' * 9
+        with self.assertRaisesRegexp(ofp.ProtocolError, 'not a multiple of 8'):
+            ofp.action.unpack_list(buf)
+
+    def test_invalid_action_length(self):
+        import loxi.of10 as ofp
+
+        buf = '\x00' * 8
+        with self.assertRaisesRegexp(ofp.ProtocolError, 'is 0'):
+            ofp.action.unpack_list(buf)
+
+        buf = '\x00\x00\x00\x04'
+        with self.assertRaisesRegexp(ofp.ProtocolError, 'not a multiple of 8'):
+            ofp.action.unpack_list(buf)
+
+        buf = '\x00\x00\x00\x10\x00\x00\x00\x00'
+        with self.assertRaisesRegexp(ofp.ProtocolError, 'overrun'):
+            ofp.action.unpack_list(buf)
+
+    def test_invalid_action_type(self):
+        import loxi.of10 as ofp
+        buf = '\xff\xfe\x00\x08\x00\x00\x00\x00'
+        with self.assertRaisesRegexp(ofp.ProtocolError, 'unknown action type'):
+            ofp.action.unpack_list(buf)
+
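
The error cases above reflect the OpenFlow 1.0 action encoding that unpack_list walks:
each action is a TLV with a 2-byte type and a 2-byte total length, and the serialized
list must be a multiple of 8 bytes. Below is a rough, standalone sketch of that walk;
the function name and error messages are illustrative, and the generated code
additionally dispatches each TLV to the matching action class.

    import struct

    def walk_actions(buf):
        """Yield (type, length) for each action TLV in buf; a sketch only."""
        if len(buf) % 8 != 0:
            raise ValueError("action list length not a multiple of 8")
        offset = 0
        while offset < len(buf):
            atype, alen = struct.unpack_from("!HH", buf, offset)
            if alen == 0:
                raise ValueError("action length is 0")
            if offset + alen > len(buf):
                raise ValueError("action overruns the buffer")
            yield atype, alen
            offset += alen

    list(walk_actions("\x00\x00\x00\x08\x00\x01\x00\x00"))  # one output action
    # -> [(0, 8)]
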
+class TestConstants(unittest.TestCase):
+    def test_ports(self):
+        import loxi.of10 as ofp
+        self.assertEquals(0xffff, ofp.OFPP_NONE)
+
+    def test_wildcards(self):
+        import loxi.of10 as ofp
+        self.assertEquals(0xfc000, ofp.OFPFW_NW_DST_MASK)
+
+class TestCommon(unittest.TestCase):
+    def test_port_desc_pack(self):
+        import loxi.of10 as ofp
+        obj = ofp.port_desc(port_no=ofp.OFPP_CONTROLLER,
+                            hw_addr=[1,2,3,4,5,6],
+                            name="foo",
+                            config=ofp.OFPPC_NO_FLOOD,
+                            state=ofp.OFPPS_STP_FORWARD,
+                            curr=ofp.OFPPF_10MB_HD,
+                            advertised=ofp.OFPPF_1GB_FD,
+                            supported=ofp.OFPPF_AUTONEG,
+                            peer=ofp.OFPPF_PAUSE_ASYM)
+        expected = ''.join([
+            '\xff\xfd', # port_no
+            '\x01\x02\x03\x04\x05\x06', # hw_addr
+            'foo'.ljust(16, '\x00'), # name
+            '\x00\x00\x00\x10', # config
+            '\x00\x00\x02\x00', # state
+            '\x00\x00\x00\x01', # curr
+            '\x00\x00\x00\x20', # advertised
+            '\x00\x00\x02\x00', # supported
+            '\x00\x00\x08\x00', # peer
+        ])
+        self.assertEquals(expected, obj.pack())
+
+    def test_port_desc_unpack(self):
+        import loxi.of10 as ofp
+        buf = ''.join([
+            '\xff\xfd', # port_no
+            '\x01\x02\x03\x04\x05\x06', # hw_addr
+            'foo'.ljust(16, '\x00'), # name
+            '\x00\x00\x00\x10', # config
+            '\x00\x00\x02\x00', # state
+            '\x00\x00\x00\x01', # curr
+            '\x00\x00\x00\x20', # advertised
+            '\x00\x00\x02\x00', # supported
+            '\x00\x00\x08\x00', # peer
+        ])
+        obj = ofp.port_desc.unpack(buf)
+        self.assertEquals(ofp.OFPP_CONTROLLER, obj.port_no)
+        self.assertEquals('foo', obj.name)
+        self.assertEquals(ofp.OFPPF_PAUSE_ASYM, obj.peer)
+
+    def test_table_stats_entry_pack(self):
+        import loxi.of10 as ofp
+        obj = ofp.table_stats_entry(table_id=3,
+                                    name="foo",
+                                    wildcards=ofp.OFPFW_ALL,
+                                    max_entries=5,
+                                    active_count=2,
+                                    lookup_count=1099511627775,
+                                    matched_count=9300233470495232273L)
+        expected = ''.join([
+            '\x03', # table_id
+            '\x00\x00\x00', # pad
+            'foo'.ljust(32, '\x00'), # name
+            '\x00\x3f\xFF\xFF', # wildcards
+            '\x00\x00\x00\x05', # max_entries
+            '\x00\x00\x00\x02', # active_count
+            '\x00\x00\x00\xff\xff\xff\xff\xff', # lookup_count
+            '\x81\x11\x11\x11\x11\x11\x11\x11', # matched_count
+        ])
+        self.assertEquals(expected, obj.pack())
+
+    def test_table_stats_entry_unpack(self):
+        import loxi.of10 as ofp
+        buf = ''.join([
+            '\x03', # table_id
+            '\x00\x00\x00', # pad
+            'foo'.ljust(32, '\x00'), # name
+            '\x00\x3f\xFF\xFF', # wildcards
+            '\x00\x00\x00\x05', # max_entries
+            '\x00\x00\x00\x02', # active_count
+            '\x00\x00\x00\xff\xff\xff\xff\xff', # lookup_count
+            '\x81\x11\x11\x11\x11\x11\x11\x11', # matched_count
+        ])
+        obj = ofp.table_stats_entry.unpack(buf)
+        self.assertEquals(3, obj.table_id)
+        self.assertEquals('foo', obj.name)
+        self.assertEquals(9300233470495232273L, obj.matched_count)
+
+    def test_flow_stats_entry_pack(self):
+        import loxi.of10 as ofp
+        obj = ofp.flow_stats_entry(table_id=3,
+                                   match=ofp.match(),
+                                   duration_sec=1,
+                                   duration_nsec=2,
+                                   priority=100,
+                                   idle_timeout=5,
+                                   hard_timeout=10,
+                                   cookie=0x0123456789abcdef,
+                                   packet_count=10,
+                                   byte_count=1000,
+                                   actions=[ofp.action.output(port=1),
+                                            ofp.action.output(port=2)])
+        expected = ''.join([
+            '\x00\x68', # length
+            '\x03', # table_id
+            '\x00', # pad
+            '\x00\x3f\xff\xff', # match.wildcards
+            '\x00' * 36, # remaining match fields
+            '\x00\x00\x00\x01', # duration_sec
+            '\x00\x00\x00\x02', # duration_nsec
+            '\x00\x64', # priority
+            '\x00\x05', # idle_timeout
+            '\x00\x0a', # hard_timeout
+            '\x00' * 6, # pad2
+            '\x01\x23\x45\x67\x89\xab\xcd\xef', # cookie
+            '\x00\x00\x00\x00\x00\x00\x00\x0a', # packet_count
+            '\x00\x00\x00\x00\x00\x00\x03\xe8', # byte_count
+            '\x00\x00', # actions[0].type
+            '\x00\x08', # actions[0].len
+            '\x00\x01', # actions[0].port
+            '\x00\x00', # actions[0].max_len
+            '\x00\x00', # actions[1].type
+            '\x00\x08', # actions[1].len
+            '\x00\x02', # actions[1].port
+            '\x00\x00', # actions[1].max_len
+        ])
+        self.assertEquals(expected, obj.pack())
+
+    def test_flow_stats_entry_unpack(self):
+        import loxi.of10 as ofp
+        buf = ''.join([
+            '\x00\x68', # length
+            '\x03', # table_id
+            '\x00', # pad
+            '\x00\x3f\xff\xff', # match.wildcards
+            '\x00' * 36, # remaining match fields
+            '\x00\x00\x00\x01', # duration_sec
+            '\x00\x00\x00\x02', # duration_nsec
+            '\x00\x64', # priority
+            '\x00\x05', # idle_timeout
+            '\x00\x0a', # hard_timeout
+            '\x00' * 6, # pad2
+            '\x01\x23\x45\x67\x89\xab\xcd\xef', # cookie
+            '\x00\x00\x00\x00\x00\x00\x00\x0a', # packet_count
+            '\x00\x00\x00\x00\x00\x00\x03\xe8', # byte_count
+            '\x00\x00', # actions[0].type
+            '\x00\x08', # actions[0].len
+            '\x00\x01', # actions[0].port
+            '\x00\x00', # actions[0].max_len
+            '\x00\x00', # actions[1].type
+            '\x00\x08', # actions[1].len
+            '\x00\x02', # actions[1].port
+            '\x00\x00', # actions[1].max_len
+        ])
+        obj = ofp.flow_stats_entry.unpack(buf)
+        self.assertEquals(3, obj.table_id)
+        self.assertEquals(ofp.OFPFW_ALL, obj.match.wildcards)
+        self.assertEquals(2, len(obj.actions))
+        self.assertEquals(1, obj.actions[0].port)
+        self.assertEquals(2, obj.actions[1].port)
+
+    def test_match(self):
+        import loxi.of10 as ofp
+        match = ofp.match()
+        self.assertEquals(match.wildcards, ofp.OFPFW_ALL)
+        self.assertEquals(match.tcp_src, 0)
+        buf = match.pack()
+        match2 = ofp.match.unpack(buf)
+        self.assertEquals(match, match2)
+
+class TestMessages(unittest.TestCase):
+    def test_hello_construction(self):
+        import loxi.of10 as ofp
+
+        msg = ofp.message.hello()
+        self.assertEquals(msg.version, ofp.OFP_VERSION)
+        self.assertEquals(msg.type, ofp.OFPT_HELLO)
+        self.assertEquals(msg.xid, None)
+
+        msg = ofp.message.hello(xid=123)
+        self.assertEquals(msg.xid, 123)
+
+        # 0 is a valid xid distinct from None
+        msg = ofp.message.hello(xid=0)
+        self.assertEquals(msg.xid, 0)
+
+    def test_hello_unpack(self):
+        import loxi.of10 as ofp
+
+        # Normal case
+        buf = "\x01\x00\x00\x08\x12\x34\x56\x78"
+        msg = ofp.message.hello(xid=0x12345678)
+        self.assertEquals(buf, msg.pack())
+
+        # Invalid length
+        buf = "\x01\x00\x00\x09\x12\x34\x56\x78\x9a"
+        with self.assertRaisesRegexp(ofp.ProtocolError, "should be 8"):
+            ofp.message.hello.unpack(buf)
+
+    def test_echo_request_construction(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.echo_request(data="abc")
+        self.assertEquals(msg.data, "abc")
+
+    def test_echo_request_pack(self):
+        import loxi.of10 as ofp
+
+        msg = ofp.message.echo_request(xid=0x12345678, data="abc")
+        buf = msg.pack()
+        self.assertEquals(buf, "\x01\x02\x00\x0b\x12\x34\x56\x78\x61\x62\x63")
+
+        msg2 = ofp.message.echo_request.unpack(buf)
+        self.assertEquals(msg, msg2)
+
+    def test_echo_request_unpack(self):
+        import loxi.of10 as ofp
+
+        # Normal case
+        buf = "\x01\x02\x00\x0b\x12\x34\x56\x78\x61\x62\x63"
+        msg = ofp.message.echo_request(xid=0x12345678, data="abc")
+        self.assertEquals(buf, msg.pack())
+
+        # Invalid length
+        buf = "\x01\x02\x00\x07\x12\x34\x56"
+        with self.assertRaisesRegexp(ofp.ProtocolError, "buffer too short"):
+            ofp.message.echo_request.unpack(buf)
+
+    def test_echo_request_equality(self):
+        import loxi.of10 as ofp
+
+        msg = ofp.message.echo_request(xid=0x12345678, data="abc")
+        msg2 = ofp.message.echo_request(xid=0x12345678, data="abc")
+        #msg2 = ofp.message.echo_request.unpack(msg.pack())
+        self.assertEquals(msg, msg2)
+
+        msg2.xid = 1
+        self.assertNotEquals(msg, msg2)
+        msg2.xid = msg.xid
+
+        msg2.data = "a"
+        self.assertNotEquals(msg, msg2)
+        msg2.data = msg.data
+
+    def test_echo_request_show(self):
+        import loxi.of10 as ofp
+        expected = "echo_request { xid = 0x12345678, data = 'ab\\x01' }"
+        msg = ofp.message.echo_request(xid=0x12345678, data="ab\x01")
+        self.assertEquals(msg.show(), expected)
+
+    def test_flow_add(self):
+        import loxi.of10 as ofp
+        match = ofp.match()
+        msg = ofp.message.flow_add(xid=1,
+                                   match=match,
+                                   cookie=1,
+                                   idle_timeout=5,
+                                   flags=ofp.OFPFF_CHECK_OVERLAP,
+                                   actions=[
+                                       ofp.action.output(port=1),
+                                       ofp.action.output(port=2),
+                                       ofp.action.output(port=ofp.OFPP_CONTROLLER,
+                                                         max_len=1024)])
+        buf = msg.pack()
+        msg2 = ofp.message.flow_add.unpack(buf)
+        self.assertEquals(msg, msg2)
+
+    def test_port_mod_pack(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.port_mod(xid=2,
+                                   port_no=ofp.OFPP_CONTROLLER,
+                                   hw_addr=[1,2,3,4,5,6],
+                                   config=0x90ABCDEF,
+                                   mask=0xFF11FF11,
+                                   advertise=0xCAFE6789)
+        expected = "\x01\x0f\x00\x20\x00\x00\x00\x02\xff\xfd\x01\x02\x03\x04\x05\x06\x90\xab\xcd\xef\xff\x11\xff\x11\xca\xfe\x67\x89\x00\x00\x00\x00"
+        self.assertEquals(expected, msg.pack())
+
+    def test_desc_stats_reply_pack(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.desc_stats_reply(xid=3,
+                                           flags=ofp.OFPSF_REPLY_MORE,
+                                           mfr_desc="The Indigo-2 Community",
+                                           hw_desc="Unknown server",
+                                           sw_desc="Indigo-2 LRI pre-release",
+                                           serial_num="11235813213455",
+                                           dp_desc="Indigo-2 LRI forwarding module")
+        expected = ''.join([
+            '\x01', '\x11', # version/type
+            '\x04\x2c', # length
+            '\x00\x00\x00\x03', # xid
+            '\x00\x00', # stats_type
+            '\x00\x01', # flags
+            'The Indigo-2 Community'.ljust(256, '\x00'), # mfr_desc
+            'Unknown server'.ljust(256, '\x00'), # hw_desc
+            'Indigo-2 LRI pre-release'.ljust(256, '\x00'), # sw_desc
+            '11235813213455'.ljust(32, '\x00'), # serial_num
+            'Indigo-2 LRI forwarding module'.ljust(256, '\x00'), # dp_desc
+        ])
+        self.assertEquals(expected, msg.pack())
+
+    def test_desc_stats_reply_unpack(self):
+        import loxi.of10 as ofp
+        buf = ''.join([
+            '\x01', '\x11', # version/type
+            '\x04\x2c', # length
+            '\x00\x00\x00\x03', # xid
+            '\x00\x00', # stats_type
+            '\x00\x01', # flags
+            'The Indigo-2 Community'.ljust(256, '\x00'), # mfr_desc
+            'Unknown server'.ljust(256, '\x00'), # hw_desc
+            'Indigo-2 LRI pre-release'.ljust(256, '\x00'), # sw_desc
+            '11235813213455'.ljust(32, '\x00'), # serial_num
+            'Indigo-2 LRI forwarding module'.ljust(256, '\x00'), # dp_desc
+        ])
+        msg = ofp.message.desc_stats_reply.unpack(buf)
+        self.assertEquals('Indigo-2 LRI forwarding module', msg.dp_desc)
+        self.assertEquals('11235813213455', msg.serial_num)
+        self.assertEquals(ofp.OFPST_DESC, msg.stats_type)
+        self.assertEquals(ofp.OFPSF_REPLY_MORE, msg.flags)
+
+    def test_port_status_pack(self):
+        import loxi.of10 as ofp
+
+        desc = ofp.port_desc(port_no=ofp.OFPP_CONTROLLER,
+                             hw_addr=[1,2,3,4,5,6],
+                             name="foo",
+                             config=ofp.OFPPC_NO_FLOOD,
+                             state=ofp.OFPPS_STP_FORWARD,
+                             curr=ofp.OFPPF_10MB_HD,
+                             advertised=ofp.OFPPF_1GB_FD,
+                             supported=ofp.OFPPF_AUTONEG,
+                             peer=ofp.OFPPF_PAUSE_ASYM)
+
+        msg = ofp.message.port_status(xid=4,
+                                      reason=ofp.OFPPR_DELETE,
+                                      desc=desc)
+        expected = ''.join([
+            '\x01', '\x0c', # version/type
+            '\x00\x40', # length
+            '\x00\x00\x00\x04', # xid
+            '\x01', # reason
+            '\x00\x00\x00\x00\x00\x00\x00' # pad
+            '\xff\xfd', # desc.port_no
+            '\x01\x02\x03\x04\x05\x06', # desc.hw_addr
+            'foo'.ljust(16, '\x00'), # desc.name
+            '\x00\x00\x00\x10', # desc.config
+            '\x00\x00\x02\x00', # desc.state
+            '\x00\x00\x00\x01', # desc.curr
+            '\x00\x00\x00\x20', # desc.advertised
+            '\x00\x00\x02\x00', # desc.supported
+            '\x00\x00\x08\x00', # desc.peer
+        ])
+        self.assertEquals(expected, msg.pack())
+
+    def test_port_status_unpack(self):
+        import loxi.of10 as ofp
+        buf = ''.join([
+            '\x01', '\x0c', # version/type
+            '\x00\x40', # length
+            '\x00\x00\x00\x04', # xid
+            '\x01', # reason
+            '\x00\x00\x00\x00\x00\x00\x00' # pad
+            '\xff\xfd', # desc.port_no
+            '\x01\x02\x03\x04\x05\x06', # desc.hw_addr
+            'foo'.ljust(16, '\x00'), # desc.name
+            '\x00\x00\x00\x10', # desc.config
+            '\x00\x00\x02\x00', # desc.state
+            '\x00\x00\x00\x01', # desc.curr
+            '\x00\x00\x00\x20', # desc.advertised
+            '\x00\x00\x02\x00', # desc.supported
+            '\x00\x00\x08\x00', # desc.peer
+        ])
+        msg = ofp.message.port_status.unpack(buf)
+        self.assertEquals('foo', msg.desc.name)
+        self.assertEquals(ofp.OFPPF_PAUSE_ASYM, msg.desc.peer)
+
+    def test_port_stats_reply_pack(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.port_stats_reply(xid=5, flags=0, entries=[
+            ofp.port_stats_entry(port_no=1, rx_packets=56, collisions=5),
+            ofp.port_stats_entry(port_no=ofp.OFPP_LOCAL, rx_packets=1, collisions=1)])
+        expected = ''.join([
+            '\x01', '\x11', # version/type
+            '\x00\xdc', # length
+            '\x00\x00\x00\x05', # xid
+            '\x00\x04', # stats_type
+            '\x00\x00', # flags
+            '\x00\x01', # entries[0].port_no
+            '\x00\x00\x00\x00\x00\x00' # entries[0].pad
+            '\x00\x00\x00\x00\x00\x00\x00\x38', # entries[0].rx_packets
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].tx_packets
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_bytes
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].tx_bytes
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_dropped
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].tx_dropped
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_errors
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].tx_errors
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_frame_err
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_over_err
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_crc_err
+            '\x00\x00\x00\x00\x00\x00\x00\x05', # entries[0].collisions
+            '\xff\xfe', # entries[1].port_no
+            '\x00\x00\x00\x00\x00\x00' # entries[1].pad
+            '\x00\x00\x00\x00\x00\x00\x00\x01', # entries[1].rx_packets
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].tx_packets
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_bytes
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].tx_bytes
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_dropped
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].tx_dropped
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_errors
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].tx_errors
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_frame_err
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_over_err
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_crc_err
+            '\x00\x00\x00\x00\x00\x00\x00\x01', # entries[1].collisions
+        ])
+        self.assertEquals(expected, msg.pack())
+
+    def test_port_stats_reply_unpack(self):
+        import loxi.of10 as ofp
+        buf = ''.join([
+            '\x01', '\x11', # version/type
+            '\x00\xdc', # length
+            '\x00\x00\x00\x05', # xid
+            '\x00\x04', # stats_type
+            '\x00\x00', # flags
+            '\x00\x01', # entries[0].port_no
+            '\x00\x00\x00\x00\x00\x00' # entries[0].pad
+            '\x00\x00\x00\x00\x00\x00\x00\x38', # entries[0].rx_packets
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].tx_packets
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_bytes
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].tx_bytes
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_dropped
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].tx_dropped
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_errors
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].tx_errors
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_frame_err
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_over_err
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[0].rx_crc_err
+            '\x00\x00\x00\x00\x00\x00\x00\x05', # entries[0].collisions
+            '\xff\xfe', # entries[1].port_no
+            '\x00\x00\x00\x00\x00\x00' # entries[1].pad
+            '\x00\x00\x00\x00\x00\x00\x00\x01', # entries[1].rx_packets
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].tx_packets
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_bytes
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].tx_bytes
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_dropped
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].tx_dropped
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_errors
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].tx_errors
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_frame_err
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_over_err
+            '\x00\x00\x00\x00\x00\x00\x00\x00', # entries[1].rx_crc_err
+            '\x00\x00\x00\x00\x00\x00\x00\x01', # entries[1].collisions
+        ])
+        msg = ofp.message.port_stats_reply.unpack(buf)
+        self.assertEquals(ofp.OFPST_PORT, msg.stats_type)
+        self.assertEquals(2, len(msg.entries))
+
+    sample_flow_stats_reply_buf = ''.join([
+        '\x01', '\x11', # version/type
+        '\x00\xe4', # length
+        '\x00\x00\x00\x06', # xid
+        '\x00\x01', # stats_type
+        '\x00\x00', # flags
+        '\x00\x68', # entries[0].length
+        '\x03', # entries[0].table_id
+        '\x00', # entries[0].pad
+        '\x00\x3f\xff\xff', # entries[0].match.wildcards
+        '\x00' * 36, # remaining match fields
+        '\x00\x00\x00\x01', # entries[0].duration_sec
+        '\x00\x00\x00\x02', # entries[0].duration_nsec
+        '\x00\x64', # entries[0].priority
+        '\x00\x05', # entries[0].idle_timeout
+        '\x00\x0a', # entries[0].hard_timeout
+        '\x00' * 6, # entries[0].pad2
+        '\x01\x23\x45\x67\x89\xab\xcd\xef', # entries[0].cookie
+        '\x00\x00\x00\x00\x00\x00\x00\x0a', # entries[0].packet_count
+        '\x00\x00\x00\x00\x00\x00\x03\xe8', # entries[0].byte_count
+        '\x00\x00', # entries[0].actions[0].type
+        '\x00\x08', # entries[0].actions[0].len
+        '\x00\x01', # entries[0].actions[0].port
+        '\x00\x00', # entries[0].actions[0].max_len
+        '\x00\x00', # entries[0].actions[1].type
+        '\x00\x08', # entries[0].actions[1].len
+        '\x00\x02', # entries[0].actions[1].port
+        '\x00\x00', # entries[0].actions[1].max_len
+        '\x00\x70', # entries[1].length
+        '\x04', # entries[1].table_id
+        '\x00', # entries[1].pad
+        '\x00\x3f\xff\xff', # entries[1].match.wildcards
+        '\x00' * 36, # remaining match fields
+        '\x00\x00\x00\x01', # entries[1].duration_sec
+        '\x00\x00\x00\x02', # entries[1].duration_nsec
+        '\x00\x64', # entries[1].priority
+        '\x00\x05', # entries[1].idle_timeout
+        '\x00\x0a', # entries[1].hard_timeout
+        '\x00' * 6, # entries[1].pad2
+        '\x01\x23\x45\x67\x89\xab\xcd\xef', # entries[1].cookie
+        '\x00\x00\x00\x00\x00\x00\x00\x0a', # entries[1].packet_count
+        '\x00\x00\x00\x00\x00\x00\x03\xe8', # entries[1].byte_count
+        '\x00\x00', # entries[1].actions[0].type
+        '\x00\x08', # entries[1].actions[0].len
+        '\x00\x01', # entries[1].actions[0].port
+        '\x00\x00', # entries[1].actions[0].max_len
+        '\x00\x00', # entries[1].actions[1].type
+        '\x00\x08', # entries[1].actions[1].len
+        '\x00\x02', # entries[1].actions[1].port
+        '\x00\x00', # entries[1].actions[1].max_len
+        '\x00\x00', # entries[1].actions[2].type
+        '\x00\x08', # entries[1].actions[2].len
+        '\x00\x03', # entries[1].actions[2].port
+        '\x00\x00', # entries[1].actions[2].max_len
+    ])
+
+    def test_flow_stats_reply_pack(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.flow_stats_reply(xid=6, flags=0, entries=[
+            ofp.flow_stats_entry(table_id=3,
+                                 match=ofp.match(),
+                                 duration_sec=1,
+                                 duration_nsec=2,
+                                 priority=100,
+                                 idle_timeout=5,
+                                 hard_timeout=10,
+                                 cookie=0x0123456789abcdef,
+                                 packet_count=10,
+                                 byte_count=1000,
+                                 actions=[ofp.action.output(port=1),
+                                          ofp.action.output(port=2)]),
+            ofp.flow_stats_entry(table_id=4,
+                                 match=ofp.match(),
+                                 duration_sec=1,
+                                 duration_nsec=2,
+                                 priority=100,
+                                 idle_timeout=5,
+                                 hard_timeout=10,
+                                 cookie=0x0123456789abcdef,
+                                 packet_count=10,
+                                 byte_count=1000,
+                                 actions=[ofp.action.output(port=1),
+                                          ofp.action.output(port=2),
+                                          ofp.action.output(port=3)])])
+        self.assertEquals(self.sample_flow_stats_reply_buf, msg.pack())
+
+    def test_flow_stats_reply_unpack(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.flow_stats_reply.unpack(self.sample_flow_stats_reply_buf)
+        self.assertEquals(ofp.OFPST_FLOW, msg.stats_type)
+        self.assertEquals(2, len(msg.entries))
+        self.assertEquals(2, len(msg.entries[0].actions))
+        self.assertEquals(3, len(msg.entries[1].actions))
+
+    def test_flow_add_show(self):
+        import loxi.of10 as ofp
+        expected = """\
+flow_add {
+  xid = None,
+  match = match_v1 {
+    wildcards = OFPFW_DL_SRC|OFPFW_DL_DST,
+    in_port = 3,
+    eth_src = 01:23:45:67:89:ab,
+    eth_dst = cd:ef:01:23:45:67,
+    vlan_vid = 0x0,
+    vlan_pcp = 0x0,
+    pad1 = 0x0,
+    eth_type = 0x0,
+    ip_dscp = 0x0,
+    ip_proto = 0x0,
+    pad2 = [ 0, 0 ],
+    ipv4_src = 192.168.3.127,
+    ipv4_dst = 255.255.255.255,
+    tcp_src = 0x0,
+    tcp_dst = 0x0
+  },
+  cookie = 0x0,
+  idle_timeout = 0x0,
+  hard_timeout = 0x0,
+  priority = 0x0,
+  buffer_id = 0x0,
+  out_port = 0,
+  flags = 0x0,
+  actions = [
+    output { port = OFPP_FLOOD, max_len = 0x0 },
+    nicira_dec_ttl { pad = 0x0, pad2 = 0x0 },
+    bsn_set_tunnel_dst { dst = 0x0 }
+  ]
+}"""
+        msg = ofp.message.flow_add(
+            match=ofp.match(
+                wildcards=ofp.OFPFW_DL_SRC|ofp.OFPFW_DL_DST,
+                in_port=3,
+                ipv4_src=0xc0a8037f,
+                ipv4_dst=0xffffffff,
+                eth_src=[0x01, 0x23, 0x45, 0x67, 0x89, 0xab],
+                eth_dst=[0xcd, 0xef, 0x01, 0x23, 0x45, 0x67]),
+            actions=[
+                ofp.action.output(port=ofp.OFPP_FLOOD),
+                ofp.action.nicira_dec_ttl(),
+                ofp.action.bsn_set_tunnel_dst()])
+        self.assertEquals(msg.show(), expected)
+
+    sample_packet_out_buf = ''.join([
+        '\x01', '\x0d', # version/type
+        '\x00\x23', # length
+        '\x12\x34\x56\x78', # xid
+        '\xab\xcd\xef\x01', # buffer_id
+        '\xff\xfe', # in_port
+        '\x00\x10', # actions_len
+        '\x00\x00', # actions[0].type
+        '\x00\x08', # actions[0].len
+        '\x00\x01', # actions[0].port
+        '\x00\x00', # actions[0].max_len
+        '\x00\x00', # actions[1].type
+        '\x00\x08', # actions[1].len
+        '\x00\x02', # actions[1].port
+        '\x00\x00', # actions[1].max_len
+        'abc' # data
+    ])
+
+    def test_packet_out_pack(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.packet_out(
+            xid=0x12345678,
+            buffer_id=0xabcdef01,
+            in_port=ofp.OFPP_LOCAL,
+            actions=[
+                ofp.action.output(port=1),
+                ofp.action.output(port=2)],
+            data='abc')
+        self.assertEquals(self.sample_packet_out_buf, msg.pack())
+
+    def test_packet_out_unpack(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.packet_out.unpack(self.sample_packet_out_buf)
+        self.assertEquals(0x12345678, msg.xid)
+        self.assertEquals(0xabcdef01, msg.buffer_id)
+        self.assertEquals(ofp.OFPP_LOCAL, msg.in_port)
+        self.assertEquals(2, len(msg.actions))
+        self.assertEquals(1, msg.actions[0].port)
+        self.assertEquals(2, msg.actions[1].port)
+        self.assertEquals('abc', msg.data)
+
+    sample_packet_in_buf = ''.join([
+        '\x01', '\x0a', # version/type
+        '\x00\x15', # length
+        '\x12\x34\x56\x78', # xid
+        '\xab\xcd\xef\x01', # buffer_id
+        '\x00\x09', # total_len
+        '\xff\xfe', # in_port
+        '\x01', # reason
+        '\x00', # pad
+        'abc', # data
+    ])
+
+    def test_packet_in_pack(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.packet_in(
+            xid=0x12345678,
+            buffer_id=0xabcdef01,
+            total_len=9,
+            in_port=ofp.OFPP_LOCAL,
+            reason=ofp.OFPR_ACTION,
+            data='abc')
+        self.assertEquals(self.sample_packet_in_buf, msg.pack())
+
+    def test_packet_in_unpack(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.packet_in.unpack(self.sample_packet_in_buf)
+        self.assertEquals(0x12345678, msg.xid)
+        self.assertEquals(0xabcdef01, msg.buffer_id)
+        self.assertEquals(9, msg.total_len)
+        self.assertEquals(ofp.OFPP_LOCAL, msg.in_port)
+        self.assertEquals(ofp.OFPR_ACTION, msg.reason)
+        self.assertEquals('abc', msg.data)
+
+    sample_queue_get_config_reply_buf = ''.join([
+        '\x01', '\x15', # version/type
+        '\x00\x50', # length
+        '\x12\x34\x56\x78', # xid
+        '\xff\xfe', # port
+        '\x00\x00\x00\x00\x00\x00', # pad
+        '\x00\x00\x00\x01', # queues[0].queue_id
+        '\x00\x18', # queues[0].len
+        '\x00\x00', # queues[0].pad
+        '\x00\x01', # queues[0].properties[0].type
+        '\x00\x10', # queues[0].properties[0].length
+        '\x00\x00\x00\x00', # queues[0].properties[0].pad
+        '\x00\x05', # queues[0].properties[0].rate
+        '\x00\x00\x00\x00\x00\x00', # queues[0].properties[0].pad2
+        '\x00\x00\x00\x02', # queues[1].queue_id
+        '\x00\x28', # queues[1].len
+        '\x00\x00', # queues[1].pad
+        '\x00\x01', # queues[1].properties[0].type
+        '\x00\x10', # queues[1].properties[0].length
+        '\x00\x00\x00\x00', # queues[1].properties[0].pad
+        '\x00\x06', # queues[1].properties[0].rate
+        '\x00\x00\x00\x00\x00\x00', # queues[1].properties[0].pad2
+        '\x00\x01', # queues[1].properties[1].type
+        '\x00\x10', # queues[1].properties[1].length
+        '\x00\x00\x00\x00', # queues[1].properties[1].pad
+        '\x00\x07', # queues[1].properties[1].rate
+        '\x00\x00\x00\x00\x00\x00', # queues[1].properties[1].pad2
+    ])
+
+    def test_queue_get_config_reply_pack(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.queue_get_config_reply(
+            xid=0x12345678,
+            port=ofp.OFPP_LOCAL,
+            queues=[
+                ofp.packet_queue(queue_id=1, properties=[
+                    ofp.queue_prop_min_rate(rate=5)]),
+                ofp.packet_queue(queue_id=2, properties=[
+                    ofp.queue_prop_min_rate(rate=6),
+                    ofp.queue_prop_min_rate(rate=7)])])
+        self.assertEquals(self.sample_queue_get_config_reply_buf, msg.pack())
+
+    def test_queue_get_config_reply_unpack(self):
+        import loxi.of10 as ofp
+        msg = ofp.message.queue_get_config_reply.unpack(self.sample_queue_get_config_reply_buf)
+        self.assertEquals(ofp.OFPP_LOCAL, msg.port)
+        self.assertEquals(msg.queues[0].queue_id, 1)
+        self.assertEquals(msg.queues[0].properties[0].rate, 5)
+        self.assertEquals(msg.queues[1].queue_id, 2)
+        self.assertEquals(msg.queues[1].properties[0].rate, 6)
+        self.assertEquals(msg.queues[1].properties[1].rate, 7)
+
+class TestParse(unittest.TestCase):
+    def test_parse_header(self):
+        import loxi
+        import loxi.of10 as ofp
+
+        msg_ver, msg_type, msg_len, msg_xid = ofp.message.parse_header("\x01\x04\xAF\xE8\x12\x34\x56\x78")
+        self.assertEquals(1, msg_ver)
+        self.assertEquals(4, msg_type)
+        self.assertEquals(45032, msg_len)
+        self.assertEquals(0x12345678, msg_xid)
+
+        with self.assertRaisesRegexp(loxi.ProtocolError, "too short"):
+            ofp.message.parse_header("\x01\x04\xAF\xE8\x12\x34\x56")
+
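
parse_header, exercised above, pulls apart the fixed 8-byte header that starts every
OpenFlow message: a 1-byte version, a 1-byte type, a 16-bit overall length and a
32-bit xid, all big-endian. An equivalent standard-library sketch (the function name
is illustrative):

    import struct

    def parse_header_sketch(buf):
        if len(buf) < 8:
            raise ValueError("message too short to contain an OpenFlow header")
        # B = version, B = type, H = total message length, L = xid
        return struct.unpack_from("!BBHL", buf, 0)

    parse_header_sketch("\x01\x04\xAF\xE8\x12\x34\x56\x78")
    # -> (1, 4, 45032, 0x12345678), matching the assertions above
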
+    def test_parse_message(self):
+        import loxi
+        import loxi.of10 as ofp
+
+        buf = "\x01\x00\x00\x08\x12\x34\x56\x78"
+        msg = ofp.message.parse_message(buf)
+        assert(msg.xid == 0x12345678)
+
+        # Get a list of all message classes
+        test_klasses = [x for x in ofp.message.__dict__.values()
+                        if type(x) == type
+                           and issubclass(x, ofp.message.Message)
+                           and x != ofp.message.Message]
+
+        for klass in test_klasses:
+            self.assertIsInstance(ofp.message.parse_message(klass(xid=1).pack()), klass)
+
+class TestUtils(unittest.TestCase):
+    def test_unpack_array(self):
+        import loxi
+        import loxi.of10.util as util
+
+        a = util.unpack_array(str, 3, "abcdefghi")
+        self.assertEquals(['abc', 'def', 'ghi'], a)
+
+        with self.assertRaisesRegexp(loxi.ProtocolError, "invalid array length"):
+            util.unpack_array(str, 3, "abcdefgh")
+
+    def test_pretty_wildcards(self):
+        import loxi.of10 as ofp
+        self.assertEquals("OFPFW_ALL", ofp.util.pretty_wildcards(ofp.OFPFW_ALL))
+        self.assertEquals("0", ofp.util.pretty_wildcards(0))
+        self.assertEquals("OFPFW_DL_SRC|OFPFW_DL_DST",
+                          ofp.util.pretty_wildcards(ofp.OFPFW_DL_SRC|ofp.OFPFW_DL_DST))
+        self.assertEquals("OFPFW_NW_SRC_MASK&0x2000",
+                          ofp.util.pretty_wildcards(ofp.OFPFW_NW_SRC_ALL))
+        self.assertEquals("OFPFW_NW_SRC_MASK&0x1a00",
+                          ofp.util.pretty_wildcards(0x00001a00))
+        self.assertEquals("OFPFW_IN_PORT|0x80000000",
+                          ofp.util.pretty_wildcards(ofp.OFPFW_IN_PORT|0x80000000))
+
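
The '&' form expected above is what pretty_flags produces when a named constant is a
multi-bit field rather than a single bit: only the overlapping bits are reported. The
arithmetic, using the OpenFlow 1.0 wildcard layout (test_wildcards above confirms
OFPFW_NW_DST_MASK == 0xfc000, i.e. 0x3f << 14; the source-address field sits 6 bits
lower):

    OFPFW_NW_SRC_MASK = 0x3f << 8    # 0x3f00: 6-bit "ignore N low bits" field
    OFPFW_NW_SRC_ALL  = 32 << 8      # 0x2000: wildcard the entire source address

    assert OFPFW_NW_SRC_ALL & OFPFW_NW_SRC_MASK == 0x2000
    # Only bit 0x2000 of the 0x3f00 field is set, hence 'OFPFW_NW_SRC_MASK&0x2000'.
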
+class TestAll(unittest.TestCase):
+    """
+    Round-trips every class through serialization/deserialization.
+    Not a replacement for hand-coded tests because it only uses the
+    default member values.
+    """
+
+    def setUp(self):
+        import loxi.of10 as ofp
+        mods = [ofp.action,ofp.message,ofp.common]
+        self.klasses = [klass for mod in mods
+                              for klass in mod.__dict__.values()
+                              if hasattr(klass, 'show')]
+        self.klasses.sort(key=lambda x: str(x))
+
+    def test_serialization(self):
+        import loxi.of10 as ofp
+        expected_failures = []
+        for klass in self.klasses:
+            def fn():
+                obj = klass()
+                if hasattr(obj, "xid"): obj.xid = 42
+                buf = obj.pack()
+                obj2 = klass.unpack(buf)
+                self.assertEquals(obj, obj2)
+            if klass in expected_failures:
+                self.assertRaises(Exception, fn)
+            else:
+                fn()
+
+    def test_show(self):
+        import loxi.of10 as ofp
+        expected_failures = []
+        for klass in self.klasses:
+            def fn():
+                obj = klass()
+                if hasattr(obj, "xid"): obj.xid = 42
+                obj.show()
+            if klass in expected_failures:
+                self.assertRaises(Exception, fn)
+            else:
+                fn()
+
+if __name__ == '__main__':
+    unittest.main()
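
As the import guard at the top of this file suggests, the suite runs against an
already-generated loxi package rather than anything in the LoxiGen tree itself. A
minimal interactive check, with a purely illustrative path to the generated package:

    import sys
    sys.path.insert(0, "/path/to/generated/python/package")  # directory containing 'loxi/'
    import loxi.of10 as ofp
    print ofp.message.echo_request(xid=0x12345678, data="abc").show()
    # -> echo_request { xid = 0x12345678, data = 'abc' }
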
diff --git a/py_gen/util.py b/py_gen/util.py
new file mode 100644
index 0000000..fbb2825
--- /dev/null
+++ b/py_gen/util.py
@@ -0,0 +1,79 @@
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+"""
+Utilities for generating the target Python code
+"""
+
+import os
+import of_g
+import loxi_front_end.type_maps as type_maps
+import loxi_utils.loxi_utils as utils
+
+templates_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'templates')
+
+def render_template(out, name, **context):
+    utils.render_template(out, name, [templates_dir], context)
+
+def render_static(out, name):
+    utils.render_static(out, name, [templates_dir])
+
+def lookup_unified_class(cls, version):
+    unified_class = of_g.unified[cls][version]
+    if "use_version" in unified_class: # deref version ref
+        ref_version = unified_class["use_version"]
+        unified_class = of_g.unified[cls][ref_version]
+    return unified_class
+
+def primary_wire_type(cls, version):
+    if cls in type_maps.stats_reply_list:
+        return type_maps.type_val[("of_stats_reply", version)]
+    elif cls in type_maps.stats_request_list:
+        return type_maps.type_val[("of_stats_request", version)]
+    elif cls in type_maps.flow_mod_list:
+        return type_maps.type_val[("of_flow_mod", version)]
+    elif (cls, version) in type_maps.type_val:
+        return type_maps.type_val[(cls, version)]
+    elif type_maps.message_is_extension(cls, version):
+        return type_maps.type_val[("of_experimenter", version)]
+    elif type_maps.action_is_extension(cls, version):
+        return type_maps.type_val[("of_action_experimenter", version)]
+    elif type_maps.action_id_is_extension(cls, version):
+        return type_maps.type_val[("of_action_id_experimenter", version)]
+    elif type_maps.instruction_is_extension(cls, version):
+        return type_maps.type_val[("of_instruction_experimenter", version)]
+    elif type_maps.queue_prop_is_extension(cls, version):
+        return type_maps.type_val[("of_queue_prop_experimenter", version)]
+    elif type_maps.table_feature_prop_is_extension(cls, version):
+        return type_maps.type_val[("of_table_feature_prop_experimenter", version)]
+    else:
+        raise ValueError
+
+def constant_for_value(version, group, value):
+    return (["const." + v["ofp_name"] for k, v in of_g.identifiers.items()
+             if k in of_g.identifiers_by_group[group] and
+                eval(v["values_by_version"].get(version, "None")) == value] or [value])[0]
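
constant_for_value maps a raw wire value back to a symbolic "const.<NAME>" string for
use in generated code, falling back to the literal value when no identifier in the
group matches. Below is a toy version with inline data shaped like the of_g tables it
consults; the group name and the OFPP_FLOOD entry are illustrative stand-ins (0xfffb
is the OpenFlow 1.0 value).

    # Illustrative stand-ins for of_g.identifiers / of_g.identifiers_by_group.
    identifiers = {
        "OFPP_FLOOD": {"ofp_name": "OFPP_FLOOD",
                       "values_by_version": {1: "0xfffb"}},
    }
    identifiers_by_group = {"ofp_port": ["OFPP_FLOOD"]}

    def constant_for_value_sketch(version, group, value):
        matches = ["const." + v["ofp_name"]
                   for k, v in identifiers.items()
                   if k in identifiers_by_group[group]
                   and eval(v["values_by_version"].get(version, "None")) == value]
        return (matches or [value])[0]

    constant_for_value_sketch(1, "ofp_port", 0xfffb)   # -> 'const.OFPP_FLOOD'
    constant_for_value_sketch(1, "ofp_port", 42)       # -> 42
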
diff --git a/pyparsing.py b/pyparsing.py
new file mode 100644
index 0000000..9be97dc
--- /dev/null
+++ b/pyparsing.py
@@ -0,0 +1,3749 @@
+# module pyparsing.py

+#

+# Copyright (c) 2003-2011  Paul T. McGuire

+#

+# Permission is hereby granted, free of charge, to any person obtaining

+# a copy of this software and associated documentation files (the

+# "Software"), to deal in the Software without restriction, including

+# without limitation the rights to use, copy, modify, merge, publish,

+# distribute, sublicense, and/or sell copies of the Software, and to

+# permit persons to whom the Software is furnished to do so, subject to

+# the following conditions:

+#

+# The above copyright notice and this permission notice shall be

+# included in all copies or substantial portions of the Software.

+#

+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,

+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF

+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.

+# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY

+# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,

+# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE

+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

+#

+#from __future__ import generators

+

+__doc__ = \

+"""

+pyparsing module - Classes and methods to define and execute parsing grammars

+

+The pyparsing module is an alternative approach to creating and executing simple grammars,

+vs. the traditional lex/yacc approach, or the use of regular expressions.  With pyparsing, you

+don't need to learn a new syntax for defining grammars or matching expressions - the parsing module

+provides a library of classes that you use to construct the grammar directly in Python.

+

+Here is a program to parse "Hello, World!" (or any greeting of the form C{"<salutation>, <addressee>!"})::

+

+    from pyparsing import Word, alphas

+

+    # define grammar of a greeting

+    greet = Word( alphas ) + "," + Word( alphas ) + "!"

+

+    hello = "Hello, World!"

+    print hello, "->", greet.parseString( hello )

+

+The program outputs the following::

+

+    Hello, World! -> ['Hello', ',', 'World', '!']

+

+The Python representation of the grammar is quite readable, owing to the self-explanatory

+class names, and the use of '+', '|' and '^' operators.

+

+The parsed results returned from C{parseString()} can be accessed as a nested list, a dictionary, or an

+object with named attributes.

+

+The pyparsing module handles some of the problems that are typically vexing when writing text parsers:

+ - extra or missing whitespace (the above program will also handle "Hello,World!", "Hello  ,  World  !", etc.)

+ - quoted strings

+ - embedded comments

+"""

+

+__version__ = "1.5.6"

+__versionTime__ = "26 June 2011 10:53"

+__author__ = "Paul McGuire <ptmcg@users.sourceforge.net>"

+

+import string

+from weakref import ref as wkref

+import copy

+import sys

+import warnings

+import re

+import sre_constants

+#~ sys.stderr.write( "testing pyparsing module, version %s, %s\n" % (__version__,__versionTime__ ) )

+

+__all__ = [

+'And', 'CaselessKeyword', 'CaselessLiteral', 'CharsNotIn', 'Combine', 'Dict', 'Each', 'Empty',

+'FollowedBy', 'Forward', 'GoToColumn', 'Group', 'Keyword', 'LineEnd', 'LineStart', 'Literal',

+'MatchFirst', 'NoMatch', 'NotAny', 'OneOrMore', 'OnlyOnce', 'Optional', 'Or',

+'ParseBaseException', 'ParseElementEnhance', 'ParseException', 'ParseExpression', 'ParseFatalException',

+'ParseResults', 'ParseSyntaxException', 'ParserElement', 'QuotedString', 'RecursiveGrammarException',

+'Regex', 'SkipTo', 'StringEnd', 'StringStart', 'Suppress', 'Token', 'TokenConverter', 'Upcase',

+'White', 'Word', 'WordEnd', 'WordStart', 'ZeroOrMore',

+'alphanums', 'alphas', 'alphas8bit', 'anyCloseTag', 'anyOpenTag', 'cStyleComment', 'col',

+'commaSeparatedList', 'commonHTMLEntity', 'countedArray', 'cppStyleComment', 'dblQuotedString',

+'dblSlashComment', 'delimitedList', 'dictOf', 'downcaseTokens', 'empty', 'getTokensEndLoc', 'hexnums',

+'htmlComment', 'javaStyleComment', 'keepOriginalText', 'line', 'lineEnd', 'lineStart', 'lineno',

+'makeHTMLTags', 'makeXMLTags', 'matchOnlyAtCol', 'matchPreviousExpr', 'matchPreviousLiteral',

+'nestedExpr', 'nullDebugAction', 'nums', 'oneOf', 'opAssoc', 'operatorPrecedence', 'printables',

+'punc8bit', 'pythonStyleComment', 'quotedString', 'removeQuotes', 'replaceHTMLEntity', 

+'replaceWith', 'restOfLine', 'sglQuotedString', 'srange', 'stringEnd',

+'stringStart', 'traceParseAction', 'unicodeString', 'upcaseTokens', 'withAttribute',

+'indentedBlock', 'originalTextFor',

+]

+

+"""

+Detect if we are running version 3.X and make appropriate changes

+Robert A. Clark

+"""

+_PY3K = sys.version_info[0] > 2

+if _PY3K:

+    _MAX_INT = sys.maxsize

+    basestring = str

+    unichr = chr

+    _ustr = str

+    alphas = string.ascii_lowercase + string.ascii_uppercase

+else:

+    _MAX_INT = sys.maxint

+    range = xrange

+    set = lambda s : dict( [(c,0) for c in s] )

+    alphas = string.lowercase + string.uppercase

+

+    def _ustr(obj):

+        """Drop-in replacement for str(obj) that tries to be Unicode friendly. It first tries

+           str(obj). If that fails with a UnicodeEncodeError, then it tries unicode(obj). It

+           then < returns the unicode object | encodes it with the default encoding | ... >.

+        """

+        if isinstance(obj,unicode):

+            return obj

+

+        try:

+            # If this works, then _ustr(obj) has the same behaviour as str(obj), so

+            # it won't break any existing code.

+            return str(obj)

+

+        except UnicodeEncodeError:

+            # The Python docs (http://docs.python.org/ref/customization.html#l2h-182)

+            # state that "The return value must be a string object". However, does a

+            # unicode object (being a subclass of basestring) count as a "string

+            # object"?

+            # If so, then return a unicode object:

+            return unicode(obj)

+            # Else encode it... but how? There are many choices... :)

+            # Replace unprintables with escape codes?

+            #return unicode(obj).encode(sys.getdefaultencoding(), 'backslashreplace_errors')

+            # Replace unprintables with question marks?

+            #return unicode(obj).encode(sys.getdefaultencoding(), 'replace')

+            # ...

+            

+    alphas = string.lowercase + string.uppercase

+

+# build list of single arg builtins, tolerant of Python version, that can be used as parse actions

+singleArgBuiltins = []

+import __builtin__

+for fname in "sum len enumerate sorted reversed list tuple set any all".split():

+    try:

+        singleArgBuiltins.append(getattr(__builtin__,fname))

+    except AttributeError:

+        continue

+

+def _xml_escape(data):

+    """Escape &, <, >, ", ', etc. in a string of data."""

+

+    # ampersand must be replaced first

+    from_symbols = '&><"\''

+    to_symbols = ['&'+s+';' for s in "amp gt lt quot apos".split()]

+    for from_,to_ in zip(from_symbols, to_symbols):

+        data = data.replace(from_, to_)

+    return data

+

+class _Constants(object):

+    pass

+

+nums       = string.digits

+hexnums    = nums + "ABCDEFabcdef"

+alphanums  = alphas + nums

+_bslash    = chr(92)

+printables = "".join( [ c for c in string.printable if c not in string.whitespace ] )

+

+class ParseBaseException(Exception):

+    """base exception class for all parsing runtime exceptions"""

+    # Performance tuning: we construct a *lot* of these, so keep this

+    # constructor as small and fast as possible

+    def __init__( self, pstr, loc=0, msg=None, elem=None ):

+        self.loc = loc

+        if msg is None:

+            self.msg = pstr

+            self.pstr = ""

+        else:

+            self.msg = msg

+            self.pstr = pstr

+        self.parserElement = elem

+

+    def __getattr__( self, aname ):

+        """supported attributes by name are:

+            - lineno - returns the line number of the exception text

+            - col - returns the column number of the exception text

+            - line - returns the line containing the exception text

+        """

+        if( aname == "lineno" ):

+            return lineno( self.loc, self.pstr )

+        elif( aname in ("col", "column") ):

+            return col( self.loc, self.pstr )

+        elif( aname == "line" ):

+            return line( self.loc, self.pstr )

+        else:

+            raise AttributeError(aname)

+

+    def __str__( self ):

+        return "%s (at char %d), (line:%d, col:%d)" % \

+                ( self.msg, self.loc, self.lineno, self.column )

+    def __repr__( self ):

+        return _ustr(self)

+    def markInputline( self, markerString = ">!<" ):

+        """Extracts the exception line from the input string, and marks

+           the location of the exception with a special symbol.

+        """

+        line_str = self.line

+        line_column = self.column - 1

+        if markerString:

+            line_str = "".join( [line_str[:line_column],

+                                markerString, line_str[line_column:]])

+        return line_str.strip()

+    def __dir__(self):

+        return "loc msg pstr parserElement lineno col line " \

+               "markInputLine __str__ __repr__".split()

+

+class ParseException(ParseBaseException):

+    """exception thrown when parse expressions don't match class;

+       supported attributes by name are:

+        - lineno - returns the line number of the exception text

+        - col - returns the column number of the exception text

+        - line - returns the line containing the exception text

+    """

+    pass

+

+class ParseFatalException(ParseBaseException):

+    """user-throwable exception thrown when inconsistent parse content

+       is found; stops all parsing immediately"""

+    pass

+

+class ParseSyntaxException(ParseFatalException):

+    """just like C{ParseFatalException}, but thrown internally when an

+       C{ErrorStop} ('-' operator) indicates that parsing is to stop immediately because

+       an unbacktrackable syntax error has been found"""

+    def __init__(self, pe):

+        super(ParseSyntaxException, self).__init__(

+                                    pe.pstr, pe.loc, pe.msg, pe.parserElement)

+

+#~ class ReparseException(ParseBaseException):

+    #~ """Experimental class - parse actions can raise this exception to cause

+       #~ pyparsing to reparse the input string:

+        #~ - with a modified input string, and/or

+        #~ - with a modified start location

+       #~ Set the values of the ReparseException in the constructor, and raise the

+       #~ exception in a parse action to cause pyparsing to use the new string/location.

+       #~ Setting the values as None causes no change to be made.

+       #~ """

+    #~ def __init_( self, newstring, restartLoc ):

+        #~ self.newParseText = newstring

+        #~ self.reparseLoc = restartLoc

+

+class RecursiveGrammarException(Exception):

+    """exception thrown by C{validate()} if the grammar could be improperly recursive"""

+    def __init__( self, parseElementList ):

+        self.parseElementTrace = parseElementList

+

+    def __str__( self ):

+        return "RecursiveGrammarException: %s" % self.parseElementTrace

+

+class _ParseResultsWithOffset(object):

+    def __init__(self,p1,p2):

+        self.tup = (p1,p2)

+    def __getitem__(self,i):

+        return self.tup[i]

+    def __repr__(self):

+        return repr(self.tup)

+    def setOffset(self,i):

+        self.tup = (self.tup[0],i)

+

+class ParseResults(object):

+    """Structured parse results, to provide multiple means of access to the parsed data:

+       - as a list (C{len(results)})

+       - by list index (C{results[0], results[1]}, etc.)

+       - by attribute (C{results.<resultsName>})

+       """

+    #~ __slots__ = ( "__toklist", "__tokdict", "__doinit", "__name", "__parent", "__accumNames", "__weakref__" )

+    def __new__(cls, toklist, name=None, asList=True, modal=True ):

+        if isinstance(toklist, cls):

+            return toklist

+        retobj = object.__new__(cls)

+        retobj.__doinit = True

+        return retobj

+

+    # Performance tuning: we construct a *lot* of these, so keep this

+    # constructor as small and fast as possible

+    def __init__( self, toklist, name=None, asList=True, modal=True, isinstance=isinstance ):

+        if self.__doinit:

+            self.__doinit = False

+            self.__name = None

+            self.__parent = None

+            self.__accumNames = {}

+            if isinstance(toklist, list):

+                self.__toklist = toklist[:]

+            else:

+                self.__toklist = [toklist]

+            self.__tokdict = dict()

+

+        if name is not None and name:

+            if not modal:

+                self.__accumNames[name] = 0

+            if isinstance(name,int):

+                name = _ustr(name) # will always return a str, but use _ustr for consistency

+            self.__name = name

+            if not toklist in (None,'',[]):

+                if isinstance(toklist,basestring):

+                    toklist = [ toklist ]

+                if asList:

+                    if isinstance(toklist,ParseResults):

+                        self[name] = _ParseResultsWithOffset(toklist.copy(),0)

+                    else:

+                        self[name] = _ParseResultsWithOffset(ParseResults(toklist[0]),0)

+                    self[name].__name = name

+                else:

+                    try:

+                        self[name] = toklist[0]

+                    except (KeyError,TypeError,IndexError):

+                        self[name] = toklist

+

+    def __getitem__( self, i ):

+        if isinstance( i, (int,slice) ):

+            return self.__toklist[i]

+        else:

+            if i not in self.__accumNames:

+                return self.__tokdict[i][-1][0]

+            else:

+                return ParseResults([ v[0] for v in self.__tokdict[i] ])

+

+    def __setitem__( self, k, v, isinstance=isinstance ):

+        if isinstance(v,_ParseResultsWithOffset):

+            self.__tokdict[k] = self.__tokdict.get(k,list()) + [v]

+            sub = v[0]

+        elif isinstance(k,int):

+            self.__toklist[k] = v

+            sub = v

+        else:

+            self.__tokdict[k] = self.__tokdict.get(k,list()) + [_ParseResultsWithOffset(v,0)]

+            sub = v

+        if isinstance(sub,ParseResults):

+            sub.__parent = wkref(self)

+

+    def __delitem__( self, i ):

+        if isinstance(i,(int,slice)):

+            mylen = len( self.__toklist )

+            del self.__toklist[i]

+

+            # convert int to slice

+            if isinstance(i, int):

+                if i < 0:

+                    i += mylen

+                i = slice(i, i+1)

+            # get removed indices

+            removed = list(range(*i.indices(mylen)))

+            removed.reverse()

+            # fixup indices in token dictionary

+            for name in self.__tokdict:

+                occurrences = self.__tokdict[name]

+                for j in removed:

+                    for k, (value, position) in enumerate(occurrences):

+                        occurrences[k] = _ParseResultsWithOffset(value, position - (position > j))

+        else:

+            del self.__tokdict[i]

+

+    def __contains__( self, k ):

+        return k in self.__tokdict

+

+    def __len__( self ): return len( self.__toklist )

+    def __bool__(self): return len( self.__toklist ) > 0

+    __nonzero__ = __bool__

+    def __iter__( self ): return iter( self.__toklist )

+    def __reversed__( self ): return iter( self.__toklist[::-1] )

+    def keys( self ):

+        """Returns all named result keys."""

+        return self.__tokdict.keys()

+

+    def pop( self, index=-1 ):

+        """Removes and returns item at specified index (default=last).

+           Will work with either numeric indices or dict-key indices."""

+        ret = self[index]

+        del self[index]

+        return ret

+

+    def get(self, key, defaultValue=None):

+        """Returns named result matching the given key, or if there is no

+           such name, then returns the given C{defaultValue} or C{None} if no

+           C{defaultValue} is specified."""

+        if key in self:

+            return self[key]

+        else:

+            return defaultValue

+

+    def insert( self, index, insStr ):

+        """Inserts new element at location index in the list of parsed tokens."""

+        self.__toklist.insert(index, insStr)

+        # fixup indices in token dictionary

+        for name in self.__tokdict:

+            occurrences = self.__tokdict[name]

+            for k, (value, position) in enumerate(occurrences):

+                occurrences[k] = _ParseResultsWithOffset(value, position + (position > index))

+

+    def items( self ):

+        """Returns all named result keys and values as a list of tuples."""

+        return [(k,self[k]) for k in self.__tokdict]

+

+    def values( self ):

+        """Returns all named result values."""

+        return [ v[-1][0] for v in self.__tokdict.values() ]

+

+    def __getattr__( self, name ):

+        if True: #name not in self.__slots__:

+            if name in self.__tokdict:

+                if name not in self.__accumNames:

+                    return self.__tokdict[name][-1][0]

+                else:

+                    return ParseResults([ v[0] for v in self.__tokdict[name] ])

+            else:

+                return ""

+        return None

+

+    def __add__( self, other ):

+        ret = self.copy()

+        ret += other

+        return ret

+

+    def __iadd__( self, other ):

+        if other.__tokdict:

+            offset = len(self.__toklist)

+            addoffset = ( lambda a: (a<0 and offset) or (a+offset) )

+            otheritems = other.__tokdict.items()

+            otherdictitems = [(k, _ParseResultsWithOffset(v[0],addoffset(v[1])) )

+                                for (k,vlist) in otheritems for v in vlist]

+            for k,v in otherdictitems:

+                self[k] = v

+                if isinstance(v[0],ParseResults):

+                    v[0].__parent = wkref(self)

+            

+        self.__toklist += other.__toklist

+        self.__accumNames.update( other.__accumNames )

+        return self

+

+    def __radd__(self, other):

+        if isinstance(other,int) and other == 0:

+            return self.copy()

+        

+    def __repr__( self ):

+        return "(%s, %s)" % ( repr( self.__toklist ), repr( self.__tokdict ) )

+

+    def __str__( self ):

+        out = "["

+        sep = ""

+        for i in self.__toklist:

+            if isinstance(i, ParseResults):

+                out += sep + _ustr(i)

+            else:

+                out += sep + repr(i)

+            sep = ", "

+        out += "]"

+        return out

+

+    def _asStringList( self, sep='' ):

+        out = []

+        for item in self.__toklist:

+            if out and sep:

+                out.append(sep)

+            if isinstance( item, ParseResults ):

+                out += item._asStringList()

+            else:

+                out.append( _ustr(item) )

+        return out

+

+    def asList( self ):

+        """Returns the parse results as a nested list of matching tokens, all converted to strings."""

+        out = []

+        for res in self.__toklist:

+            if isinstance(res,ParseResults):

+                out.append( res.asList() )

+            else:

+                out.append( res )

+        return out

+

+    def asDict( self ):

+        """Returns the named parse results as dictionary."""

+        return dict( self.items() )

+

+    def copy( self ):

+        """Returns a new copy of a C{ParseResults} object."""

+        ret = ParseResults( self.__toklist )

+        ret.__tokdict = self.__tokdict.copy()

+        ret.__parent = self.__parent

+        ret.__accumNames.update( self.__accumNames )

+        ret.__name = self.__name

+        return ret

+

+    def asXML( self, doctag=None, namedItemsOnly=False, indent="", formatted=True ):

+        """Returns the parse results as XML. Tags are created for tokens and lists that have defined results names."""

+        nl = "\n"

+        out = []

+        namedItems = dict( [ (v[1],k) for (k,vlist) in self.__tokdict.items()

+                                                            for v in vlist ] )

+        nextLevelIndent = indent + "  "

+

+        # collapse out indents if formatting is not desired

+        if not formatted:

+            indent = ""

+            nextLevelIndent = ""

+            nl = ""

+

+        selfTag = None

+        if doctag is not None:

+            selfTag = doctag

+        else:

+            if self.__name:

+                selfTag = self.__name

+

+        if not selfTag:

+            if namedItemsOnly:

+                return ""

+            else:

+                selfTag = "ITEM"

+

+        out += [ nl, indent, "<", selfTag, ">" ]

+

+        worklist = self.__toklist

+        for i,res in enumerate(worklist):

+            if isinstance(res,ParseResults):

+                if i in namedItems:

+                    out += [ res.asXML(namedItems[i],

+                                        namedItemsOnly and doctag is None,

+                                        nextLevelIndent,

+                                        formatted)]

+                else:

+                    out += [ res.asXML(None,

+                                        namedItemsOnly and doctag is None,

+                                        nextLevelIndent,

+                                        formatted)]

+            else:

+                # individual token, see if there is a name for it

+                resTag = None

+                if i in namedItems:

+                    resTag = namedItems[i]

+                if not resTag:

+                    if namedItemsOnly:

+                        continue

+                    else:

+                        resTag = "ITEM"

+                xmlBodyText = _xml_escape(_ustr(res))

+                out += [ nl, nextLevelIndent, "<", resTag, ">",

+                                                xmlBodyText,

+                                                "</", resTag, ">" ]

+

+        out += [ nl, indent, "</", selfTag, ">" ]

+        return "".join(out)

+

+    def __lookup(self,sub):

+        for k,vlist in self.__tokdict.items():

+            for v,loc in vlist:

+                if sub is v:

+                    return k

+        return None

+

+    def getName(self):

+        """Returns the results name for this token expression."""

+        if self.__name:

+            return self.__name

+        elif self.__parent:

+            par = self.__parent()

+            if par:

+                return par.__lookup(self)

+            else:

+                return None

+        elif (len(self) == 1 and

+               len(self.__tokdict) == 1 and

+               self.__tokdict.values()[0][0][1] in (0,-1)):

+            return self.__tokdict.keys()[0]

+        else:

+            return None

+

+    def dump(self,indent='',depth=0):

+        """Diagnostic method for listing out the contents of a C{ParseResults}.

+           Accepts an optional C{indent} argument so that this string can be embedded

+           in a nested display of other data."""

+        out = []

+        out.append( indent+_ustr(self.asList()) )

+        keys = self.items()

+        keys.sort()

+        for k,v in keys:

+            if out:

+                out.append('\n')

+            out.append( "%s%s- %s: " % (indent,('  '*depth), k) )

+            if isinstance(v,ParseResults):

+                if v.keys():

+                    out.append( v.dump(indent,depth+1) )

+                else:

+                    out.append(_ustr(v))

+            else:

+                out.append(_ustr(v))

+        return "".join(out)

+

+    # add support for pickle protocol

+    def __getstate__(self):

+        return ( self.__toklist,

+                 ( self.__tokdict.copy(),

+                   self.__parent is not None and self.__parent() or None,

+                   self.__accumNames,

+                   self.__name ) )

+

+    def __setstate__(self,state):

+        self.__toklist = state[0]

+        (self.__tokdict,

+         par,

+         inAccumNames,

+         self.__name) = state[1]

+        self.__accumNames = {}

+        self.__accumNames.update(inAccumNames)

+        if par is not None:

+            self.__parent = wkref(par)

+        else:

+            self.__parent = None

+

+    def __dir__(self):

+        return dir(super(ParseResults,self)) + self.keys()

+

+def col (loc,strg):

+    """Returns current column within a string, counting newlines as line separators.

+   The first column is number 1.

+

+   Note: the default parsing behavior is to expand tabs in the input string

+   before starting the parsing process.  See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information

+   on parsing strings containing <TAB>s, and suggested methods to maintain a

+   consistent view of the parsed string, the parse location, and line and column

+   positions within the parsed string.

+   """

+    return (loc<len(strg) and strg[loc] == '\n') and 1 or loc - strg.rfind("\n", 0, loc)

+

+def lineno(loc,strg):

+    """Returns current line number within a string, counting newlines as line separators.

+   The first line is number 1.

+

+   Note: the default parsing behavior is to expand tabs in the input string

+   before starting the parsing process.  See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information

+   on parsing strings containing <TAB>s, and suggested methods to maintain a

+   consistent view of the parsed string, the parse location, and line and column

+   positions within the parsed string.

+   """

+    return strg.count("\n",0,loc) + 1

+

+def line( loc, strg ):

+    """Returns the line of text containing loc within a string, counting newlines as line separators.

+       """

+    lastCR = strg.rfind("\n", 0, loc)

+    nextCR = strg.find("\n", loc)

+    if nextCR >= 0:

+        return strg[lastCR+1:nextCR]

+    else:

+        return strg[lastCR+1:]
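+# Editor's note: a hedged illustration (not upstream code) of how these three
+# helpers relate a flat string offset to human-readable positions; the sample
+# string is made up for the example.
+#
+#   s = "abc\ndef"
+#   loc = 5                 # offset of the 'e'
+#   lineno(loc, s)          # -> 2
+#   col(loc, s)             # -> 2
+#   line(loc, s)            # -> 'def'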

+

+def _defaultStartDebugAction( instring, loc, expr ):

+    print ("Match " + _ustr(expr) + " at loc " + _ustr(loc) + "(%d,%d)" % ( lineno(loc,instring), col(loc,instring) ))

+

+def _defaultSuccessDebugAction( instring, startloc, endloc, expr, toks ):

+    print ("Matched " + _ustr(expr) + " -> " + str(toks.asList()))

+

+def _defaultExceptionDebugAction( instring, loc, expr, exc ):

+    print ("Exception raised:" + _ustr(exc))

+

+def nullDebugAction(*args):

+    """'Do-nothing' debug action, to suppress debugging output during parsing."""

+    pass

+

+'decorator to trim function calls to match the arity of the target'

+if not _PY3K:

+    def _trim_arity(func, maxargs=2):

+        limit = [0]

+        def wrapper(*args):

+            while 1:

+                try:

+                    return func(*args[limit[0]:])

+                except TypeError:

+                    if limit[0] <= maxargs:

+                        limit[0] += 1

+                        continue

+                    raise

+        return wrapper

+else:

+    def _trim_arity(func, maxargs=2):

+        limit = maxargs

+        def wrapper(*args):

+            #~ nonlocal limit

+            while 1:

+                try:

+                    return func(*args[limit:])

+                except TypeError:

+                    if limit:

+                        limit -= 1

+                        continue

+                    raise

+        return wrapper

+    

+class ParserElement(object):

+    """Abstract base level parser element class."""

+    DEFAULT_WHITE_CHARS = " \n\t\r"

+    verbose_stacktrace = False

+

+    def setDefaultWhitespaceChars( chars ):

+        """Overrides the default whitespace chars

+        """

+        ParserElement.DEFAULT_WHITE_CHARS = chars

+    setDefaultWhitespaceChars = staticmethod(setDefaultWhitespaceChars)

+

+    def __init__( self, savelist=False ):

+        self.parseAction = list()

+        self.failAction = None

+        #~ self.name = "<unknown>"  # don't define self.name, let subclasses try/except upcall

+        self.strRepr = None

+        self.resultsName = None

+        self.saveAsList = savelist

+        self.skipWhitespace = True

+        self.whiteChars = ParserElement.DEFAULT_WHITE_CHARS

+        self.copyDefaultWhiteChars = True

+        self.mayReturnEmpty = False # used when checking for left-recursion

+        self.keepTabs = False

+        self.ignoreExprs = list()

+        self.debug = False

+        self.streamlined = False

+        self.mayIndexError = True # used to optimize exception handling for subclasses that don't advance parse index

+        self.errmsg = ""

+        self.modalResults = True # used to mark results names as modal (report only last) or cumulative (list all)

+        self.debugActions = ( None, None, None ) #custom debug actions

+        self.re = None

+        self.callPreparse = True # used to avoid redundant calls to preParse

+        self.callDuringTry = False

+

+    def copy( self ):

+        """Make a copy of this C{ParserElement}.  Useful for defining different parse actions

+           for the same parsing pattern, using copies of the original parse element."""

+        cpy = copy.copy( self )

+        cpy.parseAction = self.parseAction[:]

+        cpy.ignoreExprs = self.ignoreExprs[:]

+        if self.copyDefaultWhiteChars:

+            cpy.whiteChars = ParserElement.DEFAULT_WHITE_CHARS

+        return cpy

+

+    def setName( self, name ):

+        """Define name for this expression, for use in debugging."""

+        self.name = name

+        self.errmsg = "Expected " + self.name

+        if hasattr(self,"exception"):

+            self.exception.msg = self.errmsg

+        return self

+

+    def setResultsName( self, name, listAllMatches=False ):

+        """Define name for referencing matching tokens as a nested attribute

+           of the returned parse results.

+           NOTE: this returns a *copy* of the original C{ParserElement} object;

+           this is so that the client can define a basic element, such as an

+           integer, and reference it in multiple places with different names.

+           

+           You can also set results names using the abbreviated syntax,

+           C{expr("name")} in place of C{expr.setResultsName("name")} - 

+           see L{I{__call__}<__call__>}.

+        """

+        newself = self.copy()

+        if name.endswith("*"):

+            name = name[:-1]

+            listAllMatches=True

+        newself.resultsName = name

+        newself.modalResults = not listAllMatches

+        return newself
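+    # Editor's note: hedged sketch (not upstream code) of the two equivalent
+    # spellings described above; 'ident' is a hypothetical element name.
+    #
+    #   from pyparsing import Word, alphas
+    #   ident = Word(alphas)
+    #   named1 = ident.setResultsName("name")
+    #   named2 = ident("name")                   # abbreviated form, same effect
+    #   named1.parseString("loxigen").name       # -> 'loxigen'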

+

+    def setBreak(self,breakFlag = True):

+        """Method to invoke the Python pdb debugger when this element is

+           about to be parsed. Set C{breakFlag} to True to enable, False to

+           disable.

+        """

+        if breakFlag:

+            _parseMethod = self._parse

+            def breaker(instring, loc, doActions=True, callPreParse=True):

+                import pdb

+                pdb.set_trace()

+                return _parseMethod( instring, loc, doActions, callPreParse )

+            breaker._originalParseMethod = _parseMethod

+            self._parse = breaker

+        else:

+            if hasattr(self._parse,"_originalParseMethod"):

+                self._parse = self._parse._originalParseMethod

+        return self

+

+    def setParseAction( self, *fns, **kwargs ):

+        """Define action to perform when successfully matching parse element definition.

+           Parse action fn is a callable method with 0-3 arguments, called as C{fn(s,loc,toks)},

+           C{fn(loc,toks)}, C{fn(toks)}, or just C{fn()}, where:

+            - s   = the original string being parsed (see note below)

+            - loc = the location of the matching substring

+            - toks = a list of the matched tokens, packaged as a ParseResults object

+           If the functions in fns modify the tokens, they can return them as the return

+           value from fn, and the modified list of tokens will replace the original.

+           Otherwise, fn does not need to return any value.

+

+           Note: the default parsing behavior is to expand tabs in the input string

+           before starting the parsing process.  See L{I{parseString}<parseString>} for more information

+           on parsing strings containing <TAB>s, and suggested methods to maintain a

+           consistent view of the parsed string, the parse location, and line and column

+           positions within the parsed string.

+           """

+        self.parseAction = list(map(_trim_arity, list(fns)))

+        self.callDuringTry = ("callDuringTry" in kwargs and kwargs["callDuringTry"])

+        return self
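+    # Editor's note: a minimal, hedged example (not upstream code) of a parse
+    # action converting matched text, using one of the fn(s,loc,toks) forms above.
+    #
+    #   from pyparsing import Word, nums
+    #   integer = Word(nums).setParseAction(lambda toks: int(toks[0]))
+    #   integer.parseString("42")[0]     # -> 42 (an int, not the string '42')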

+

+    def addParseAction( self, *fns, **kwargs ):

+        """Add parse action to expression's list of parse actions. See L{I{setParseAction}<setParseAction>}."""

+        self.parseAction += list(map(_trim_arity, list(fns)))

+        self.callDuringTry = self.callDuringTry or ("callDuringTry" in kwargs and kwargs["callDuringTry"])

+        return self

+

+    def setFailAction( self, fn ):

+        """Define action to perform if parsing fails at this expression.

+           Fail action fn is a callable function that takes the arguments

+           C{fn(s,loc,expr,err)} where:

+            - s = string being parsed

+            - loc = location where expression match was attempted and failed

+            - expr = the parse expression that failed

+            - err = the exception thrown

+           The function returns no value.  It may throw C{ParseFatalException}

+           if it is desired to stop parsing immediately."""

+        self.failAction = fn

+        return self

+

+    def _skipIgnorables( self, instring, loc ):

+        exprsFound = True

+        while exprsFound:

+            exprsFound = False

+            for e in self.ignoreExprs:

+                try:

+                    while 1:

+                        loc,dummy = e._parse( instring, loc )

+                        exprsFound = True

+                except ParseException:

+                    pass

+        return loc

+

+    def preParse( self, instring, loc ):

+        if self.ignoreExprs:

+            loc = self._skipIgnorables( instring, loc )

+

+        if self.skipWhitespace:

+            wt = self.whiteChars

+            instrlen = len(instring)

+            while loc < instrlen and instring[loc] in wt:

+                loc += 1

+

+        return loc

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        return loc, []

+

+    def postParse( self, instring, loc, tokenlist ):

+        return tokenlist

+

+    #~ @profile

+    def _parseNoCache( self, instring, loc, doActions=True, callPreParse=True ):

+        debugging = ( self.debug ) #and doActions )

+

+        if debugging or self.failAction:

+            #~ print ("Match",self,"at loc",loc,"(%d,%d)" % ( lineno(loc,instring), col(loc,instring) ))

+            if (self.debugActions[0] ):

+                self.debugActions[0]( instring, loc, self )

+            if callPreParse and self.callPreparse:

+                preloc = self.preParse( instring, loc )

+            else:

+                preloc = loc

+            tokensStart = preloc

+            try:

+                try:

+                    loc,tokens = self.parseImpl( instring, preloc, doActions )

+                except IndexError:

+                    raise ParseException( instring, len(instring), self.errmsg, self )

+            except ParseBaseException:

+                #~ print ("Exception raised:", err)

+                err = None

+                if self.debugActions[2]:

+                    err = sys.exc_info()[1]

+                    self.debugActions[2]( instring, tokensStart, self, err )

+                if self.failAction:

+                    if err is None:

+                        err = sys.exc_info()[1]

+                    self.failAction( instring, tokensStart, self, err )

+                raise

+        else:

+            if callPreParse and self.callPreparse:

+                preloc = self.preParse( instring, loc )

+            else:

+                preloc = loc

+            tokensStart = preloc

+            if self.mayIndexError or loc >= len(instring):

+                try:

+                    loc,tokens = self.parseImpl( instring, preloc, doActions )

+                except IndexError:

+                    raise ParseException( instring, len(instring), self.errmsg, self )

+            else:

+                loc,tokens = self.parseImpl( instring, preloc, doActions )

+

+        tokens = self.postParse( instring, loc, tokens )

+

+        retTokens = ParseResults( tokens, self.resultsName, asList=self.saveAsList, modal=self.modalResults )

+        if self.parseAction and (doActions or self.callDuringTry):

+            if debugging:

+                try:

+                    for fn in self.parseAction:

+                        tokens = fn( instring, tokensStart, retTokens )

+                        if tokens is not None:

+                            retTokens = ParseResults( tokens,

+                                                      self.resultsName,

+                                                      asList=self.saveAsList and isinstance(tokens,(ParseResults,list)),

+                                                      modal=self.modalResults )

+                except ParseBaseException:

+                    #~ print "Exception raised in user parse action:", err

+                    if (self.debugActions[2] ):

+                        err = sys.exc_info()[1]

+                        self.debugActions[2]( instring, tokensStart, self, err )

+                    raise

+            else:

+                for fn in self.parseAction:

+                    tokens = fn( instring, tokensStart, retTokens )

+                    if tokens is not None:

+                        retTokens = ParseResults( tokens,

+                                                  self.resultsName,

+                                                  asList=self.saveAsList and isinstance(tokens,(ParseResults,list)),

+                                                  modal=self.modalResults )

+

+        if debugging:

+            #~ print ("Matched",self,"->",retTokens.asList())

+            if (self.debugActions[1] ):

+                self.debugActions[1]( instring, tokensStart, loc, self, retTokens )

+

+        return loc, retTokens

+

+    def tryParse( self, instring, loc ):

+        try:

+            return self._parse( instring, loc, doActions=False )[0]

+        except ParseFatalException:

+            raise ParseException( instring, loc, self.errmsg, self)

+

+    # this method gets repeatedly called during backtracking with the same arguments -

+    # we can cache these arguments and save ourselves the trouble of re-parsing the contained expression

+    def _parseCache( self, instring, loc, doActions=True, callPreParse=True ):

+        lookup = (self,instring,loc,callPreParse,doActions)

+        if lookup in ParserElement._exprArgCache:

+            value = ParserElement._exprArgCache[ lookup ]

+            if isinstance(value, Exception):

+                raise value

+            return (value[0],value[1].copy())

+        else:

+            try:

+                value = self._parseNoCache( instring, loc, doActions, callPreParse )

+                ParserElement._exprArgCache[ lookup ] = (value[0],value[1].copy())

+                return value

+            except ParseBaseException:

+                pe = sys.exc_info()[1]

+                ParserElement._exprArgCache[ lookup ] = pe

+                raise

+

+    _parse = _parseNoCache

+

+    # argument cache for optimizing repeated calls when backtracking through recursive expressions

+    _exprArgCache = {}

+    def resetCache():

+        ParserElement._exprArgCache.clear()

+    resetCache = staticmethod(resetCache)

+

+    _packratEnabled = False

+    def enablePackrat():

+        """Enables "packrat" parsing, which adds memoizing to the parsing logic.

+           Repeated parse attempts at the same string location (which happens

+           often in many complex grammars) can immediately return a cached value,

+           instead of re-executing parsing/validating code.  Memoization is applied to

+           both valid results and parsing exceptions.

+

+           This speedup may break existing programs that use parse actions that

+           have side-effects.  For this reason, packrat parsing is disabled when

+           you first import pyparsing.  To activate the packrat feature, your

+           program must call the class method C{ParserElement.enablePackrat()}.  If

+           your program uses C{psyco} to "compile as you go", you must call

+           C{enablePackrat} before calling C{psyco.full()}.  If you do not do this,

+           Python will crash.  For best results, call C{enablePackrat()} immediately

+           after importing pyparsing.

+        """

+        if not ParserElement._packratEnabled:

+            ParserElement._packratEnabled = True

+            ParserElement._parse = ParserElement._parseCache

+    enablePackrat = staticmethod(enablePackrat)
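+    # Editor's note: hedged usage sketch (not upstream code); as the docstring
+    # above advises, packrat memoization is switched on once, right after import.
+    #
+    #   import pyparsing
+    #   pyparsing.ParserElement.enablePackrat()
+    #   # ... then build and use grammars as usual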

+

+    def parseString( self, instring, parseAll=False ):

+        """Execute the parse expression with the given string.

+           This is the main interface to the client code, once the complete

+           expression has been built.

+

+           If you want the grammar to require that the entire input string be

+           successfully parsed, then set C{parseAll} to True (equivalent to ending

+           the grammar with C{StringEnd()}).

+

+           Note: C{parseString} implicitly calls C{expandtabs()} on the input string,

+           in order to report proper column numbers in parse actions.

+           If the input string contains tabs and

+           the grammar uses parse actions that use the C{loc} argument to index into the

+           string being parsed, you can ensure you have a consistent view of the input

+           string by:

+            - calling C{parseWithTabs} on your grammar before calling C{parseString}

+              (see L{I{parseWithTabs}<parseWithTabs>})

+            - define your parse action using the full C{(s,loc,toks)} signature, and

+              reference the input string using the parse action's C{s} argument

+            - explicitly expand the tabs in your input string before calling

+              C{parseString}

+        """

+        ParserElement.resetCache()

+        if not self.streamlined:

+            self.streamline()

+            #~ self.saveAsList = True

+        for e in self.ignoreExprs:

+            e.streamline()

+        if not self.keepTabs:

+            instring = instring.expandtabs()

+        try:

+            loc, tokens = self._parse( instring, 0 )

+            if parseAll:

+                loc = self.preParse( instring, loc )

+                se = Empty() + StringEnd()

+                se._parse( instring, loc )

+        except ParseBaseException:

+            if ParserElement.verbose_stacktrace:

+                raise

+            else:

+                # catch and re-raise exception from here, clears out pyparsing internal stack trace

+                exc = sys.exc_info()[1]

+                raise exc

+        else:

+            return tokens
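+    # Editor's note: hedged sketch (not upstream code) contrasting partial and
+    # whole-string parsing with C{parseAll}; the grammar is illustrative only.
+    #
+    #   from pyparsing import Word, alphas
+    #   greet = Word(alphas) + "," + Word(alphas)
+    #   greet.parseString("Hello, World !")                 # succeeds; trailing ' !' is left unparsed
+    #   greet.parseString("Hello, World !", parseAll=True)  # raises ParseException at the '!'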

+

+    def scanString( self, instring, maxMatches=_MAX_INT, overlap=False ):

+        """Scan the input string for expression matches.  Each match will return the

+           matching tokens, start location, and end location.  May be called with optional

+           C{maxMatches} argument, to clip scanning after 'n' matches are found.  If

+           C{overlap} is specified, then overlapping matches will be reported.

+

+           Note that the start and end locations are reported relative to the string

+           being parsed.  See L{I{parseString}<parseString>} for more information on parsing

+           strings with embedded tabs."""

+        if not self.streamlined:

+            self.streamline()

+        for e in self.ignoreExprs:

+            e.streamline()

+

+        if not self.keepTabs:

+            instring = _ustr(instring).expandtabs()

+        instrlen = len(instring)

+        loc = 0

+        preparseFn = self.preParse

+        parseFn = self._parse

+        ParserElement.resetCache()

+        matches = 0

+        try:

+            while loc <= instrlen and matches < maxMatches:

+                try:

+                    preloc = preparseFn( instring, loc )

+                    nextLoc,tokens = parseFn( instring, preloc, callPreParse=False )

+                except ParseException:

+                    loc = preloc+1

+                else:

+                    if nextLoc > loc:

+                        matches += 1

+                        yield tokens, preloc, nextLoc

+                        if overlap:

+                            nextloc = preparseFn( instring, loc )

+                            if nextloc > loc:

+                                loc = nextLoc

+                            else:

+                                loc += 1

+                        else:

+                            loc = nextLoc

+                    else:

+                        loc = preloc+1

+        except ParseBaseException:

+            if ParserElement.verbose_stacktrace:

+                raise

+            else:

+                # catch and re-raise exception from here, clears out pyparsing internal stack trace

+                exc = sys.exc_info()[1]

+                raise exc
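+    # Editor's note: hedged example (not upstream code) of iterating over the
+    # (tokens, start, end) triples yielded above; the input text is made up.
+    #
+    #   from pyparsing import Word, nums
+    #   [(t[0], s, e) for t, s, e in Word(nums).scanString("of10 of13")]
+    #   # -> [('10', 2, 4), ('13', 7, 9)]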

+

+    def transformString( self, instring ):

+        """Extension to C{scanString}, to modify matching text with modified tokens that may

+           be returned from a parse action.  To use C{transformString}, define a grammar and

+           attach a parse action to it that modifies the returned token list.

+           Invoking C{transformString()} on a target string will then scan for matches,

+           and replace the matched text patterns according to the logic in the parse

+           action.  C{transformString()} returns the resulting transformed string."""

+        out = []

+        lastE = 0

+        # force preservation of <TAB>s, to minimize unwanted transformation of string, and to

+        # keep string locs straight between transformString and scanString

+        self.keepTabs = True

+        try:

+            for t,s,e in self.scanString( instring ):

+                out.append( instring[lastE:s] )

+                if t:

+                    if isinstance(t,ParseResults):

+                        out += t.asList()

+                    elif isinstance(t,list):

+                        out += t

+                    else:

+                        out.append(t)

+                lastE = e

+            out.append(instring[lastE:])

+            out = [o for o in out if o]

+            return "".join(map(_ustr,_flatten(out)))

+        except ParseBaseException:

+            if ParserElement.verbose_stacktrace:

+                raise

+            else:

+                # catch and re-raise exception from here, clears out pyparsing internal stack trace

+                exc = sys.exc_info()[1]

+                raise exc
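+    # Editor's note: hedged sketch (not upstream code) of the pattern described
+    # above: attach a parse action that rewrites each match, then transform.
+    #
+    #   from pyparsing import Word, nums
+    #   number = Word(nums).setParseAction(lambda toks: "<%s>" % toks[0])
+    #   number.transformString("port 6633 and 6653")
+    #   # -> 'port <6633> and <6653>'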

+

+    def searchString( self, instring, maxMatches=_MAX_INT ):

+        """Another extension to C{scanString}, simplifying the access to the tokens found

+           to match the given parse expression.  May be called with optional

+           C{maxMatches} argument, to clip searching after 'n' matches are found.

+        """

+        try:

+            return ParseResults([ t for t,s,e in self.scanString( instring, maxMatches ) ])

+        except ParseBaseException:

+            if ParserElement.verbose_stacktrace:

+                raise

+            else:

+                # catch and re-raise exception from here, clears out pyparsing internal stack trace

+                exc = sys.exc_info()[1]

+                raise exc

+

+    def __add__(self, other ):

+        """Implementation of + operator - returns And"""

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        if not isinstance( other, ParserElement ):

+            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),

+                    SyntaxWarning, stacklevel=2)

+            return None

+        return And( [ self, other ] )

+

+    def __radd__(self, other ):

+        """Implementation of + operator when left operand is not a C{ParserElement}"""

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        if not isinstance( other, ParserElement ):

+            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),

+                    SyntaxWarning, stacklevel=2)

+            return None

+        return other + self

+

+    def __sub__(self, other):

+        """Implementation of - operator, returns C{And} with error stop"""

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        if not isinstance( other, ParserElement ):

+            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),

+                    SyntaxWarning, stacklevel=2)

+            return None

+        return And( [ self, And._ErrorStop(), other ] )

+

+    def __rsub__(self, other ):

+        """Implementation of - operator when left operand is not a C{ParserElement}"""

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        if not isinstance( other, ParserElement ):

+            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),

+                    SyntaxWarning, stacklevel=2)

+            return None

+        return other - self

+

+    def __mul__(self,other):

+        """Implementation of * operator, allows use of C{expr * 3} in place of

+           C{expr + expr + expr}.  Expressions may also be multiplied by a 2-integer

+           tuple, similar to C{{min,max}} multipliers in regular expressions.  Tuples

+           may also include C{None} as in:

+            - C{expr*(n,None)} or C{expr*(n,)} is equivalent

+              to C{expr*n + ZeroOrMore(expr)}

+              (read as "at least n instances of C{expr}")

+            - C{expr*(None,n)} is equivalent to C{expr*(0,n)}

+              (read as "0 to n instances of C{expr}")

+            - C{expr*(None,None)} is equivalent to C{ZeroOrMore(expr)}

+            - C{expr*(1,None)} is equivalent to C{OneOrMore(expr)}

+

+           Note that C{expr*(None,n)} does not raise an exception if

+           more than n exprs exist in the input stream; that is,

+           C{expr*(None,n)} does not enforce a maximum number of expr

+           occurrences.  If this behavior is desired, then write

+           C{expr*(None,n) + ~expr}

+

+        """

+        if isinstance(other,int):

+            minElements, optElements = other,0

+        elif isinstance(other,tuple):

+            other = (other + (None, None))[:2]

+            if other[0] is None:

+                other = (0, other[1])

+            if isinstance(other[0],int) and other[1] is None:

+                if other[0] == 0:

+                    return ZeroOrMore(self)

+                if other[0] == 1:

+                    return OneOrMore(self)

+                else:

+                    return self*other[0] + ZeroOrMore(self)

+            elif isinstance(other[0],int) and isinstance(other[1],int):

+                minElements, optElements = other

+                optElements -= minElements

+            else:

+                raise TypeError("cannot multiply 'ParserElement' and ('%s','%s') objects", type(other[0]),type(other[1]))

+        else:

+            raise TypeError("cannot multiply 'ParserElement' and '%s' objects", type(other))

+

+        if minElements < 0:

+            raise ValueError("cannot multiply ParserElement by negative value")

+        if optElements < 0:

+            raise ValueError("second tuple value must be greater or equal to first tuple value")

+        if minElements == optElements == 0:

+            raise ValueError("cannot multiply ParserElement by 0 or (0,0)")

+

+        if (optElements):

+            def makeOptionalList(n):

+                if n>1:

+                    return Optional(self + makeOptionalList(n-1))

+                else:

+                    return Optional(self)

+            if minElements:

+                if minElements == 1:

+                    ret = self + makeOptionalList(optElements)

+                else:

+                    ret = And([self]*minElements) + makeOptionalList(optElements)

+            else:

+                ret = makeOptionalList(optElements)

+        else:

+            if minElements == 1:

+                ret = self

+            else:

+                ret = And([self]*minElements)

+        return ret
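+    # Editor's note: hedged illustration (not upstream code) of the repetition
+    # forms described in the docstring above.
+    #
+    #   from pyparsing import Word, nums
+    #   octet = Word(nums)
+    #   four = octet * 4                           # exactly four occurrences
+    #   two_to_four = octet * (2, 4)               # between two and four occurrences
+    #   four.parseString("10 0 0 1").asList()      # -> ['10', '0', '0', '1']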

+

+    def __rmul__(self, other):

+        return self.__mul__(other)

+

+    def __or__(self, other ):

+        """Implementation of | operator - returns C{MatchFirst}"""

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        if not isinstance( other, ParserElement ):

+            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),

+                    SyntaxWarning, stacklevel=2)

+            return None

+        return MatchFirst( [ self, other ] )

+

+    def __ror__(self, other ):

+        """Implementation of | operator when left operand is not a C{ParserElement}"""

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        if not isinstance( other, ParserElement ):

+            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),

+                    SyntaxWarning, stacklevel=2)

+            return None

+        return other | self

+

+    def __xor__(self, other ):

+        """Implementation of ^ operator - returns C{Or}"""

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        if not isinstance( other, ParserElement ):

+            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),

+                    SyntaxWarning, stacklevel=2)

+            return None

+        return Or( [ self, other ] )

+

+    def __rxor__(self, other ):

+        """Implementation of ^ operator when left operand is not a C{ParserElement}"""

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        if not isinstance( other, ParserElement ):

+            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),

+                    SyntaxWarning, stacklevel=2)

+            return None

+        return other ^ self

+

+    def __and__(self, other ):

+        """Implementation of & operator - returns C{Each}"""

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        if not isinstance( other, ParserElement ):

+            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),

+                    SyntaxWarning, stacklevel=2)

+            return None

+        return Each( [ self, other ] )

+

+    def __rand__(self, other ):

+        """Implementation of & operator when left operand is not a C{ParserElement}"""

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        if not isinstance( other, ParserElement ):

+            warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),

+                    SyntaxWarning, stacklevel=2)

+            return None

+        return other & self

+

+    def __invert__( self ):

+        """Implementation of ~ operator - returns C{NotAny}"""

+        return NotAny( self )

+

+    def __call__(self, name):

+        """Shortcut for C{setResultsName}, with C{listAllMatches=default}::

+             userdata = Word(alphas).setResultsName("name") + Word(nums+"-").setResultsName("socsecno")

+           could be written as::

+             userdata = Word(alphas)("name") + Word(nums+"-")("socsecno")

+             

+           If C{name} is given with a trailing C{'*'} character, then C{listAllMatches} will be

+           passed as C{True}.

+           """

+        return self.setResultsName(name)

+

+    def suppress( self ):

+        """Suppresses the output of this C{ParserElement}; useful to keep punctuation from

+           cluttering up returned output.

+        """

+        return Suppress( self )

+

+    def leaveWhitespace( self ):

+        """Disables the skipping of whitespace before matching the characters in the

+           C{ParserElement}'s defined pattern.  This is normally only used internally by

+           the pyparsing module, but may be needed in some whitespace-sensitive grammars.

+        """

+        self.skipWhitespace = False

+        return self

+

+    def setWhitespaceChars( self, chars ):

+        """Overrides the default whitespace chars

+        """

+        self.skipWhitespace = True

+        self.whiteChars = chars

+        self.copyDefaultWhiteChars = False

+        return self

+

+    def parseWithTabs( self ):

+        """Overrides default behavior to expand C{<TAB>}s to spaces before parsing the input string.

+           Must be called before C{parseString} when the input grammar contains elements that

+           match C{<TAB>} characters."""

+        self.keepTabs = True

+        return self

+

+    def ignore( self, other ):

+        """Define expression to be ignored (e.g., comments) while doing pattern

+           matching; may be called repeatedly, to define multiple comment or other

+           ignorable patterns.

+        """

+        if isinstance( other, Suppress ):

+            if other not in self.ignoreExprs:

+                self.ignoreExprs.append( other.copy() )

+        else:

+            self.ignoreExprs.append( Suppress( other.copy() ) )

+        return self
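+    # Editor's note: hedged example (not upstream code) of skipping comments
+    # while matching, as described above; cStyleComment ships with pyparsing.
+    #
+    #   from pyparsing import Word, alphanums, OneOrMore, cStyleComment
+    #   idents = OneOrMore(Word(alphanums + "_"))
+    #   idents.ignore(cStyleComment)
+    #   idents.parseString("of_match /* v1.0 */ of_action").asList()
+    #   # -> ['of_match', 'of_action']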

+

+    def setDebugActions( self, startAction, successAction, exceptionAction ):

+        """Enable display of debugging messages while doing pattern matching."""

+        self.debugActions = (startAction or _defaultStartDebugAction,

+                             successAction or _defaultSuccessDebugAction,

+                             exceptionAction or _defaultExceptionDebugAction)

+        self.debug = True

+        return self

+

+    def setDebug( self, flag=True ):

+        """Enable display of debugging messages while doing pattern matching.

+           Set C{flag} to True to enable, False to disable."""

+        if flag:

+            self.setDebugActions( _defaultStartDebugAction, _defaultSuccessDebugAction, _defaultExceptionDebugAction )

+        else:

+            self.debug = False

+        return self

+

+    def __str__( self ):

+        return self.name

+

+    def __repr__( self ):

+        return _ustr(self)

+

+    def streamline( self ):

+        self.streamlined = True

+        self.strRepr = None

+        return self

+

+    def checkRecursion( self, parseElementList ):

+        pass

+

+    def validate( self, validateTrace=[] ):

+        """Check defined expressions for valid structure, check for infinite recursive definitions."""

+        self.checkRecursion( [] )

+

+    def parseFile( self, file_or_filename, parseAll=False ):

+        """Execute the parse expression on the given file or filename.

+           If a filename is specified (instead of a file object),

+           the entire file is opened, read, and closed before parsing.

+        """

+        try:

+            file_contents = file_or_filename.read()

+        except AttributeError:

+            f = open(file_or_filename, "rb")

+            file_contents = f.read()

+            f.close()

+        try:

+            return self.parseString(file_contents, parseAll)

+        except ParseBaseException:

+            # catch and re-raise exception from here, clears out pyparsing internal stack trace

+            exc = sys.exc_info()[1]

+            raise exc

+

+    def getException(self):

+        return ParseException("",0,self.errmsg,self)

+

+    def __getattr__(self,aname):

+        if aname == "myException":

+            self.myException = ret = self.getException();

+            return ret;

+        else:

+            raise AttributeError("no such attribute " + aname)

+

+    def __eq__(self,other):

+        if isinstance(other, ParserElement):

+            return self is other or self.__dict__ == other.__dict__

+        elif isinstance(other, basestring):

+            try:

+                self.parseString(_ustr(other), parseAll=True)

+                return True

+            except ParseBaseException:

+                return False

+        else:

+            return super(ParserElement,self)==other

+

+    def __ne__(self,other):

+        return not (self == other)

+

+    def __hash__(self):

+        return hash(id(self))

+

+    def __req__(self,other):

+        return self == other

+

+    def __rne__(self,other):

+        return not (self == other)

+

+

+class Token(ParserElement):

+    """Abstract C{ParserElement} subclass, for defining atomic matching patterns."""

+    def __init__( self ):

+        super(Token,self).__init__( savelist=False )

+

+    def setName(self, name):

+        s = super(Token,self).setName(name)

+        self.errmsg = "Expected " + self.name

+        return s

+

+

+class Empty(Token):

+    """An empty token, will always match."""

+    def __init__( self ):

+        super(Empty,self).__init__()

+        self.name = "Empty"

+        self.mayReturnEmpty = True

+        self.mayIndexError = False

+

+

+class NoMatch(Token):

+    """A token that will never match."""

+    def __init__( self ):

+        super(NoMatch,self).__init__()

+        self.name = "NoMatch"

+        self.mayReturnEmpty = True

+        self.mayIndexError = False

+        self.errmsg = "Unmatchable token"

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        exc = self.myException

+        exc.loc = loc

+        exc.pstr = instring

+        raise exc

+

+

+class Literal(Token):

+    """Token to exactly match a specified string."""

+    def __init__( self, matchString ):

+        super(Literal,self).__init__()

+        self.match = matchString

+        self.matchLen = len(matchString)

+        try:

+            self.firstMatchChar = matchString[0]

+        except IndexError:

+            warnings.warn("null string passed to Literal; use Empty() instead",

+                            SyntaxWarning, stacklevel=2)

+            self.__class__ = Empty

+        self.name = '"%s"' % _ustr(self.match)

+        self.errmsg = "Expected " + self.name

+        self.mayReturnEmpty = False

+        self.mayIndexError = False

+

+    # Performance tuning: this routine gets called a *lot*

+    # if this is a single character match string  and the first character matches,

+    # short-circuit as quickly as possible, and avoid calling startswith

+    #~ @profile

+    def parseImpl( self, instring, loc, doActions=True ):

+        if (instring[loc] == self.firstMatchChar and

+            (self.matchLen==1 or instring.startswith(self.match,loc)) ):

+            return loc+self.matchLen, self.match

+        #~ raise ParseException( instring, loc, self.errmsg )

+        exc = self.myException

+        exc.loc = loc

+        exc.pstr = instring

+        raise exc

+_L = Literal

+

+class Keyword(Token):

+    """Token to exactly match a specified string as a keyword, that is, it must be

+       immediately followed by a non-keyword character.  Compare with C{Literal}::

+         Literal("if") will match the leading C{'if'} in C{'ifAndOnlyIf'}.

+         Keyword("if") will not; it will only match the leading C{'if'} in C{'if x=1'}, or C{'if(y==2)'}

+       Accepts two optional constructor arguments in addition to the keyword string:

+       C{identChars} is a string of characters that would be valid identifier characters,

+       defaulting to all alphanumerics + "_" and "$"; C{caseless} allows case-insensitive

+       matching, default is C{False}.

+    """

+    DEFAULT_KEYWORD_CHARS = alphanums+"_$"

+

+    def __init__( self, matchString, identChars=DEFAULT_KEYWORD_CHARS, caseless=False ):

+        super(Keyword,self).__init__()

+        self.match = matchString

+        self.matchLen = len(matchString)

+        try:

+            self.firstMatchChar = matchString[0]

+        except IndexError:

+            warnings.warn("null string passed to Keyword; use Empty() instead",

+                            SyntaxWarning, stacklevel=2)

+        self.name = '"%s"' % self.match

+        self.errmsg = "Expected " + self.name

+        self.mayReturnEmpty = False

+        self.mayIndexError = False

+        self.caseless = caseless

+        if caseless:

+            self.caselessmatch = matchString.upper()

+            identChars = identChars.upper()

+        self.identChars = set(identChars)

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if self.caseless:

+            if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and

+                 (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) and

+                 (loc == 0 or instring[loc-1].upper() not in self.identChars) ):

+                return loc+self.matchLen, self.match

+        else:

+            if (instring[loc] == self.firstMatchChar and

+                (self.matchLen==1 or instring.startswith(self.match,loc)) and

+                (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen] not in self.identChars) and

+                (loc == 0 or instring[loc-1] not in self.identChars) ):

+                return loc+self.matchLen, self.match

+        #~ raise ParseException( instring, loc, self.errmsg )

+        exc = self.myException

+        exc.loc = loc

+        exc.pstr = instring

+        raise exc

+

+    def copy(self):

+        c = super(Keyword,self).copy()

+        c.identChars = Keyword.DEFAULT_KEYWORD_CHARS

+        return c

+

+    def setDefaultKeywordChars( chars ):

+        """Overrides the default Keyword chars

+        """

+        Keyword.DEFAULT_KEYWORD_CHARS = chars

+    setDefaultKeywordChars = staticmethod(setDefaultKeywordChars)

+

+class CaselessLiteral(Literal):

+    """Token to match a specified string, ignoring case of letters.

+       Note: the matched results will always be in the case of the given

+       match string, NOT the case of the input text.

+    """

+    def __init__( self, matchString ):

+        super(CaselessLiteral,self).__init__( matchString.upper() )

+        # Preserve the defining literal.

+        self.returnString = matchString

+        self.name = "'%s'" % self.returnString

+        self.errmsg = "Expected " + self.name

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if instring[ loc:loc+self.matchLen ].upper() == self.match:

+            return loc+self.matchLen, self.returnString

+        #~ raise ParseException( instring, loc, self.errmsg )

+        exc = self.myException

+        exc.loc = loc

+        exc.pstr = instring

+        raise exc

+

+class CaselessKeyword(Keyword):

+    def __init__( self, matchString, identChars=Keyword.DEFAULT_KEYWORD_CHARS ):

+        super(CaselessKeyword,self).__init__( matchString, identChars, caseless=True )

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and

+             (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) ):

+            return loc+self.matchLen, self.match

+        #~ raise ParseException( instring, loc, self.errmsg )

+        exc = self.myException

+        exc.loc = loc

+        exc.pstr = instring

+        raise exc

+

+class Word(Token):

+    """Token for matching words composed of allowed character sets.

+       Defined with a string containing all allowed initial characters,

+       an optional string containing allowed body characters (if omitted,

+       defaults to the initial character set), and an optional minimum,

+       maximum, and/or exact length.  The default value for C{min} is 1 (a

+       minimum value < 1 is not valid); the default values for C{max} and C{exact}

+       are 0, meaning no maximum or exact length restriction. An optional

+       C{excludeChars} parameter can list characters that might be found in

+       the input C{bodyChars} string; useful to define a word of all printables

+       except for one or two characters, for instance.

+    """

+    def __init__( self, initChars, bodyChars=None, min=1, max=0, exact=0, asKeyword=False, excludeChars=None ):

+        super(Word,self).__init__()

+        if excludeChars:

+            initChars = ''.join([c for c in initChars if c not in excludeChars])

+            if bodyChars:

+                bodyChars = ''.join([c for c in bodyChars if c not in excludeChars])

+        self.initCharsOrig = initChars

+        self.initChars = set(initChars)

+        if bodyChars :

+            self.bodyCharsOrig = bodyChars

+            self.bodyChars = set(bodyChars)

+        else:

+            self.bodyCharsOrig = initChars

+            self.bodyChars = set(initChars)

+

+        self.maxSpecified = max > 0

+

+        if min < 1:

+            raise ValueError("cannot specify a minimum length < 1; use Optional(Word()) if zero-length word is permitted")

+

+        self.minLen = min

+

+        if max > 0:

+            self.maxLen = max

+        else:

+            self.maxLen = _MAX_INT

+

+        if exact > 0:

+            self.maxLen = exact

+            self.minLen = exact

+

+        self.name = _ustr(self)

+        self.errmsg = "Expected " + self.name

+        self.mayIndexError = False

+        self.asKeyword = asKeyword

+

+        if ' ' not in self.initCharsOrig+self.bodyCharsOrig and (min==1 and max==0 and exact==0):

+            if self.bodyCharsOrig == self.initCharsOrig:

+                self.reString = "[%s]+" % _escapeRegexRangeChars(self.initCharsOrig)

+            elif len(self.bodyCharsOrig) == 1:

+                self.reString = "%s[%s]*" % \

+                                      (re.escape(self.initCharsOrig),

+                                      _escapeRegexRangeChars(self.bodyCharsOrig),)

+            else:

+                self.reString = "[%s][%s]*" % \

+                                      (_escapeRegexRangeChars(self.initCharsOrig),

+                                      _escapeRegexRangeChars(self.bodyCharsOrig),)

+            if self.asKeyword:

+                self.reString = r"\b"+self.reString+r"\b"

+            try:

+                self.re = re.compile( self.reString )

+            except:

+                self.re = None

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if self.re:

+            result = self.re.match(instring,loc)

+            if not result:

+                exc = self.myException

+                exc.loc = loc

+                exc.pstr = instring

+                raise exc

+

+            loc = result.end()

+            return loc, result.group()

+

+        if not(instring[ loc ] in self.initChars):

+            #~ raise ParseException( instring, loc, self.errmsg )

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+        start = loc

+        loc += 1

+        instrlen = len(instring)

+        bodychars = self.bodyChars

+        maxloc = start + self.maxLen

+        maxloc = min( maxloc, instrlen )

+        while loc < maxloc and instring[loc] in bodychars:

+            loc += 1

+

+        throwException = False

+        if loc - start < self.minLen:

+            throwException = True

+        if self.maxSpecified and loc < instrlen and instring[loc] in bodychars:

+            throwException = True

+        if self.asKeyword:

+            if (start>0 and instring[start-1] in bodychars) or (loc<instrlen and instring[loc] in bodychars):

+                throwException = True

+

+        if throwException:

+            #~ raise ParseException( instring, loc, self.errmsg )

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+

+        return loc, instring[start:loc]

+

+    def __str__( self ):

+        try:

+            return super(Word,self).__str__()

+        except:

+            pass

+

+

+        if self.strRepr is None:

+

+            def charsAsStr(s):

+                if len(s)>4:

+                    return s[:4]+"..."

+                else:

+                    return s

+

+            if ( self.initCharsOrig != self.bodyCharsOrig ):

+                self.strRepr = "W:(%s,%s)" % ( charsAsStr(self.initCharsOrig), charsAsStr(self.bodyCharsOrig) )

+            else:

+                self.strRepr = "W:(%s)" % charsAsStr(self.initCharsOrig)

+

+        return self.strRepr

+

+

+class Regex(Token):

+    """Token for matching strings that match a given regular expression.

+       Defined with a string specifying the regular expression in a form recognized by the built-in Python re module.

+    """

+    compiledREtype = type(re.compile("[A-Z]"))

+    def __init__( self, pattern, flags=0):

+        """The parameters C{pattern} and C{flags} are passed to the C{re.compile()} function as-is. See the Python C{re} module for an explanation of the acceptable patterns and flags."""

+        super(Regex,self).__init__()

+

+        if isinstance(pattern, basestring):

+            if len(pattern) == 0:

+                warnings.warn("null string passed to Regex; use Empty() instead",

+                        SyntaxWarning, stacklevel=2)

+

+            self.pattern = pattern

+            self.flags = flags

+

+            try:

+                self.re = re.compile(self.pattern, self.flags)

+                self.reString = self.pattern

+            except sre_constants.error:

+                warnings.warn("invalid pattern (%s) passed to Regex" % pattern,

+                    SyntaxWarning, stacklevel=2)

+                raise

+

+        elif isinstance(pattern, Regex.compiledREtype):

+            self.re = pattern

+            self.pattern = \

+            self.reString = str(pattern)

+            self.flags = flags

+            

+        else:

+            raise ValueError("Regex may only be constructed with a string or a compiled RE object")

+

+        self.name = _ustr(self)

+        self.errmsg = "Expected " + self.name

+        self.mayIndexError = False

+        self.mayReturnEmpty = True

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        result = self.re.match(instring,loc)

+        if not result:

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+

+        loc = result.end()

+        d = result.groupdict()

+        ret = ParseResults(result.group())

+        if d:

+            for k in d:

+                ret[k] = d[k]

+        return loc,ret

+

+    def __str__( self ):

+        try:

+            return super(Regex,self).__str__()

+        except:

+            pass

+

+        if self.strRepr is None:

+            self.strRepr = "Re:(%s)" % repr(self.pattern)

+

+        return self.strRepr

+

+

+class QuotedString(Token):

+    """Token for matching strings that are delimited by quoting characters.

+    """

+    def __init__( self, quoteChar, escChar=None, escQuote=None, multiline=False, unquoteResults=True, endQuoteChar=None):

+        """

+           Defined with the following parameters:

+            - quoteChar - string of one or more characters defining the quote delimiting string

+            - escChar - character to escape quotes, typically backslash (default=None)

+            - escQuote - special quote sequence to escape an embedded quote string (such as SQL's "" to escape an embedded ") (default=None)

+            - multiline - boolean indicating whether quotes can span multiple lines (default=False)

+            - unquoteResults - boolean indicating whether the matched text should be unquoted (default=True)

+            - endQuoteChar - string of one or more characters defining the end of the quote delimited string (default=None => same as quoteChar)

+        """

+        super(QuotedString,self).__init__()

+

+        # remove whitespace from quote chars - won't work anyway

+        quoteChar = quoteChar.strip()

+        if len(quoteChar) == 0:

+            warnings.warn("quoteChar cannot be the empty string",SyntaxWarning,stacklevel=2)

+            raise SyntaxError()

+

+        if endQuoteChar is None:

+            endQuoteChar = quoteChar

+        else:

+            endQuoteChar = endQuoteChar.strip()

+            if len(endQuoteChar) == 0:

+                warnings.warn("endQuoteChar cannot be the empty string",SyntaxWarning,stacklevel=2)

+                raise SyntaxError()

+

+        self.quoteChar = quoteChar

+        self.quoteCharLen = len(quoteChar)

+        self.firstQuoteChar = quoteChar[0]

+        self.endQuoteChar = endQuoteChar

+        self.endQuoteCharLen = len(endQuoteChar)

+        self.escChar = escChar

+        self.escQuote = escQuote

+        self.unquoteResults = unquoteResults

+

+        if multiline:

+            self.flags = re.MULTILINE | re.DOTALL

+            self.pattern = r'%s(?:[^%s%s]' % \

+                ( re.escape(self.quoteChar),

+                  _escapeRegexRangeChars(self.endQuoteChar[0]),

+                  (escChar is not None and _escapeRegexRangeChars(escChar) or '') )

+        else:

+            self.flags = 0

+            self.pattern = r'%s(?:[^%s\n\r%s]' % \

+                ( re.escape(self.quoteChar),

+                  _escapeRegexRangeChars(self.endQuoteChar[0]),

+                  (escChar is not None and _escapeRegexRangeChars(escChar) or '') )

+        if len(self.endQuoteChar) > 1:

+            self.pattern += (

+                '|(?:' + ')|(?:'.join(["%s[^%s]" % (re.escape(self.endQuoteChar[:i]),

+                                               _escapeRegexRangeChars(self.endQuoteChar[i]))

+                                    for i in range(len(self.endQuoteChar)-1,0,-1)]) + ')'

+                )

+        if escQuote:

+            self.pattern += (r'|(?:%s)' % re.escape(escQuote))

+        if escChar:

+            self.pattern += (r'|(?:%s.)' % re.escape(escChar))

+            charset = ''.join(set(self.quoteChar[0]+self.endQuoteChar[0])).replace('^',r'\^').replace('-',r'\-')

+            self.escCharReplacePattern = re.escape(self.escChar)+("([%s])" % charset)

+        self.pattern += (r')*%s' % re.escape(self.endQuoteChar))

+

+        try:

+            self.re = re.compile(self.pattern, self.flags)

+            self.reString = self.pattern

+        except sre_constants.error:

+            warnings.warn("invalid pattern (%s) passed to Regex" % self.pattern,

+                SyntaxWarning, stacklevel=2)

+            raise

+

+        self.name = _ustr(self)

+        self.errmsg = "Expected " + self.name

+        self.mayIndexError = False

+        self.mayReturnEmpty = True

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        result = instring[loc] == self.firstQuoteChar and self.re.match(instring,loc) or None

+        if not result:

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+

+        loc = result.end()

+        ret = result.group()

+

+        if self.unquoteResults:

+

+            # strip off quotes

+            ret = ret[self.quoteCharLen:-self.endQuoteCharLen]

+

+            if isinstance(ret,basestring):

+                # replace escaped characters

+                if self.escChar:

+                    ret = re.sub(self.escCharReplacePattern,"\g<1>",ret)

+

+                # replace escaped quotes

+                if self.escQuote:

+                    ret = ret.replace(self.escQuote, self.endQuoteChar)

+

+        return loc, ret

+

+    def __str__( self ):

+        try:

+            return super(QuotedString,self).__str__()

+        except:

+            pass

+

+        if self.strRepr is None:

+            self.strRepr = "quoted string, starting with %s ending with %s" % (self.quoteChar, self.endQuoteChar)

+

+        return self.strRepr

+

+

+class CharsNotIn(Token):

+    """Token for matching words composed of characters *not* in a given set.

+       Defined with a string containing all disallowed characters, and an optional

+       minimum, maximum, and/or exact length.  The default value for C{min} is 1 (a

+       minimum value < 1 is not valid); the default values for C{max} and C{exact}

+       are 0, meaning no maximum or exact length restriction.

+    """

+    def __init__( self, notChars, min=1, max=0, exact=0 ):

+        super(CharsNotIn,self).__init__()

+        self.skipWhitespace = False

+        self.notChars = notChars

+

+        if min < 1:

+            raise ValueError("cannot specify a minimum length < 1; use Optional(CharsNotIn()) if zero-length char group is permitted")

+

+        self.minLen = min

+

+        if max > 0:

+            self.maxLen = max

+        else:

+            self.maxLen = _MAX_INT

+

+        if exact > 0:

+            self.maxLen = exact

+            self.minLen = exact

+

+        self.name = _ustr(self)

+        self.errmsg = "Expected " + self.name

+        self.mayReturnEmpty = ( self.minLen == 0 )

+        self.mayIndexError = False

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if instring[loc] in self.notChars:

+            #~ raise ParseException( instring, loc, self.errmsg )

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+

+        start = loc

+        loc += 1

+        notchars = self.notChars

+        maxlen = min( start+self.maxLen, len(instring) )

+        while loc < maxlen and \

+              (instring[loc] not in notchars):

+            loc += 1

+

+        if loc - start < self.minLen:

+            #~ raise ParseException( instring, loc, self.errmsg )

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+

+        return loc, instring[start:loc]

+

+    def __str__( self ):

+        try:

+            return super(CharsNotIn, self).__str__()

+        except:

+            pass

+

+        if self.strRepr is None:

+            if len(self.notChars) > 4:

+                self.strRepr = "!W:(%s...)" % self.notChars[:4]

+            else:

+                self.strRepr = "!W:(%s)" % self.notChars

+

+        return self.strRepr

+

+class White(Token):

+    """Special matching class for matching whitespace.  Normally, whitespace is ignored

+       by pyparsing grammars.  This class is included when some whitespace structures

+       are significant.  Define with a string containing the whitespace characters to be

+       matched; default is C{" \\t\\r\\n"}.  Also takes optional C{min}, C{max}, and C{exact} arguments,

+       as defined for the C{Word} class."""

+    whiteStrs = {

+        " " : "<SPC>",

+        "\t": "<TAB>",

+        "\n": "<LF>",

+        "\r": "<CR>",

+        "\f": "<FF>",

+        }

+    def __init__(self, ws=" \t\r\n", min=1, max=0, exact=0):

+        super(White,self).__init__()

+        self.matchWhite = ws

+        self.setWhitespaceChars( "".join([c for c in self.whiteChars if c not in self.matchWhite]) )

+        #~ self.leaveWhitespace()

+        self.name = ("".join([White.whiteStrs[c] for c in self.matchWhite]))

+        self.mayReturnEmpty = True

+        self.errmsg = "Expected " + self.name

+

+        self.minLen = min

+

+        if max > 0:

+            self.maxLen = max

+        else:

+            self.maxLen = _MAX_INT

+

+        if exact > 0:

+            self.maxLen = exact

+            self.minLen = exact

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if not(instring[ loc ] in self.matchWhite):

+            #~ raise ParseException( instring, loc, self.errmsg )

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+        start = loc

+        loc += 1

+        maxloc = start + self.maxLen

+        maxloc = min( maxloc, len(instring) )

+        while loc < maxloc and instring[loc] in self.matchWhite:

+            loc += 1

+

+        if loc - start < self.minLen:

+            #~ raise ParseException( instring, loc, self.errmsg )

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+

+        return loc, instring[start:loc]

+

+

+class _PositionToken(Token):

+    def __init__( self ):

+        super(_PositionToken,self).__init__()

+        self.name=self.__class__.__name__

+        self.mayReturnEmpty = True

+        self.mayIndexError = False

+

+class GoToColumn(_PositionToken):

+    """Token to advance to a specific column of input text; useful for tabular report scraping."""

+    def __init__( self, colno ):

+        super(GoToColumn,self).__init__()

+        self.col = colno

+

+    def preParse( self, instring, loc ):

+        if col(loc,instring) != self.col:

+            instrlen = len(instring)

+            if self.ignoreExprs:

+                loc = self._skipIgnorables( instring, loc )

+            while loc < instrlen and instring[loc].isspace() and col( loc, instring ) != self.col :

+                loc += 1

+        return loc

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        thiscol = col( loc, instring )

+        if thiscol > self.col:

+            raise ParseException( instring, loc, "Text not in expected column", self )

+        newloc = loc + self.col - thiscol

+        ret = instring[ loc: newloc ]

+        return newloc, ret

+

+class LineStart(_PositionToken):

+    """Matches if current position is at the beginning of a line within the parse string"""

+    def __init__( self ):

+        super(LineStart,self).__init__()

+        self.setWhitespaceChars( ParserElement.DEFAULT_WHITE_CHARS.replace("\n","") )

+        self.errmsg = "Expected start of line"

+

+    def preParse( self, instring, loc ):

+        preloc = super(LineStart,self).preParse(instring,loc)

+        if instring[preloc] == "\n":

+            loc += 1

+        return loc

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if not( loc==0 or

+            (loc == self.preParse( instring, 0 )) or

+            (instring[loc-1] == "\n") ): #col(loc, instring) != 1:

+            #~ raise ParseException( instring, loc, "Expected start of line" )

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+        return loc, []

+

+class LineEnd(_PositionToken):

+    """Matches if current position is at the end of a line within the parse string"""

+    def __init__( self ):

+        super(LineEnd,self).__init__()

+        self.setWhitespaceChars( ParserElement.DEFAULT_WHITE_CHARS.replace("\n","") )

+        self.errmsg = "Expected end of line"

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if loc<len(instring):

+            if instring[loc] == "\n":

+                return loc+1, "\n"

+            else:

+                #~ raise ParseException( instring, loc, "Expected end of line" )

+                exc = self.myException

+                exc.loc = loc

+                exc.pstr = instring

+                raise exc

+        elif loc == len(instring):

+            return loc+1, []

+        else:

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+

+class StringStart(_PositionToken):

+    """Matches if current position is at the beginning of the parse string"""

+    def __init__( self ):

+        super(StringStart,self).__init__()

+        self.errmsg = "Expected start of text"

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if loc != 0:

+            # see if entire string up to here is just whitespace and ignoreables

+            if loc != self.preParse( instring, 0 ):

+                #~ raise ParseException( instring, loc, "Expected start of text" )

+                exc = self.myException

+                exc.loc = loc

+                exc.pstr = instring

+                raise exc

+        return loc, []

+

+class StringEnd(_PositionToken):

+    """Matches if current position is at the end of the parse string"""

+    def __init__( self ):

+        super(StringEnd,self).__init__()

+        self.errmsg = "Expected end of text"

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if loc < len(instring):

+            #~ raise ParseException( instring, loc, "Expected end of text" )

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+        elif loc == len(instring):

+            return loc+1, []

+        elif loc > len(instring):

+            return loc, []

+        else:

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+

+class WordStart(_PositionToken):

+    """Matches if the current position is at the beginning of a Word, and

+       is not preceded by any character in a given set of C{wordChars}

+       (default=C{printables}). To emulate the C{\b} behavior of regular expressions,

+       use C{WordStart(alphanums)}. C{WordStart} will also match at the beginning of

+       the string being parsed, or at the beginning of a line.

+    """

+    def __init__(self, wordChars = printables):

+        super(WordStart,self).__init__()

+        self.wordChars = set(wordChars)

+        self.errmsg = "Not at the start of a word"

+

+    def parseImpl(self, instring, loc, doActions=True ):

+        if loc != 0:

+            if (instring[loc-1] in self.wordChars or

+                instring[loc] not in self.wordChars):

+                exc = self.myException

+                exc.loc = loc

+                exc.pstr = instring

+                raise exc

+        return loc, []

+

+class WordEnd(_PositionToken):

+    """Matches if the current position is at the end of a Word, and

+       is not followed by any character in a given set of C{wordChars}

+       (default=C{printables}). To emulate the C{\b} behavior of regular expressions,

+       use C{WordEnd(alphanums)}. C{WordEnd} will also match at the end of

+       the string being parsed, or at the end of a line.

+    """

+    def __init__(self, wordChars = printables):

+        super(WordEnd,self).__init__()

+        self.wordChars = set(wordChars)

+        self.skipWhitespace = False

+        self.errmsg = "Not at the end of a word"

+

+    def parseImpl(self, instring, loc, doActions=True ):

+        instrlen = len(instring)

+        if instrlen>0 and loc<instrlen:

+            if (instring[loc] in self.wordChars or

+                instring[loc-1] not in self.wordChars):

+                #~ raise ParseException( instring, loc, "Expected end of word" )

+                exc = self.myException

+                exc.loc = loc

+                exc.pstr = instring

+                raise exc

+        return loc, []

+

+

+class ParseExpression(ParserElement):

+    """Abstract subclass of ParserElement, for combining and post-processing parsed tokens."""

+    def __init__( self, exprs, savelist = False ):

+        super(ParseExpression,self).__init__(savelist)

+        if isinstance( exprs, list ):

+            self.exprs = exprs

+        elif isinstance( exprs, basestring ):

+            self.exprs = [ Literal( exprs ) ]

+        else:

+            try:

+                self.exprs = list( exprs )

+            except TypeError:

+                self.exprs = [ exprs ]

+        self.callPreparse = False

+

+    def __getitem__( self, i ):

+        return self.exprs[i]

+

+    def append( self, other ):

+        self.exprs.append( other )

+        self.strRepr = None

+        return self

+

+    def leaveWhitespace( self ):

+        """Extends C{leaveWhitespace} defined in base class, and also invokes C{leaveWhitespace} on

+           all contained expressions."""

+        self.skipWhitespace = False

+        self.exprs = [ e.copy() for e in self.exprs ]

+        for e in self.exprs:

+            e.leaveWhitespace()

+        return self

+

+    def ignore( self, other ):

+        if isinstance( other, Suppress ):

+            if other not in self.ignoreExprs:

+                super( ParseExpression, self).ignore( other )

+                for e in self.exprs:

+                    e.ignore( self.ignoreExprs[-1] )

+        else:

+            super( ParseExpression, self).ignore( other )

+            for e in self.exprs:

+                e.ignore( self.ignoreExprs[-1] )

+        return self

+

+    def __str__( self ):

+        try:

+            return super(ParseExpression,self).__str__()

+        except:

+            pass

+

+        if self.strRepr is None:

+            self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.exprs) )

+        return self.strRepr

+

+    def streamline( self ):

+        super(ParseExpression,self).streamline()

+

+        for e in self.exprs:

+            e.streamline()

+

+        # collapse nested And's of the form And( And( And( a,b), c), d) to And( a,b,c,d )

+        # but only if there are no parse actions or resultsNames on the nested And's

+        # (likewise for Or's and MatchFirst's)

+        if ( len(self.exprs) == 2 ):

+            other = self.exprs[0]

+            if ( isinstance( other, self.__class__ ) and

+                  not(other.parseAction) and

+                  other.resultsName is None and

+                  not other.debug ):

+                self.exprs = other.exprs[:] + [ self.exprs[1] ]

+                self.strRepr = None

+                self.mayReturnEmpty |= other.mayReturnEmpty

+                self.mayIndexError  |= other.mayIndexError

+

+            other = self.exprs[-1]

+            if ( isinstance( other, self.__class__ ) and

+                  not(other.parseAction) and

+                  other.resultsName is None and

+                  not other.debug ):

+                self.exprs = self.exprs[:-1] + other.exprs[:]

+                self.strRepr = None

+                self.mayReturnEmpty |= other.mayReturnEmpty

+                self.mayIndexError  |= other.mayIndexError

+

+        return self

+

+    def setResultsName( self, name, listAllMatches=False ):

+        ret = super(ParseExpression,self).setResultsName(name,listAllMatches)

+        return ret

+

+    def validate( self, validateTrace=[] ):

+        tmp = validateTrace[:]+[self]

+        for e in self.exprs:

+            e.validate(tmp)

+        self.checkRecursion( [] )

+        

+    def copy(self):

+        ret = super(ParseExpression,self).copy()

+        ret.exprs = [e.copy() for e in self.exprs]

+        return ret

+

+class And(ParseExpression):

+    """Requires all given C{ParseExpression}s to be found in the given order.

+       Expressions may be separated by whitespace.

+       May be constructed using the C{'+'} operator.

+    """

+

+    class _ErrorStop(Empty):

+        def __init__(self, *args, **kwargs):

+            super(Empty,self).__init__(*args, **kwargs)

+            self.leaveWhitespace()

+

+    def __init__( self, exprs, savelist = True ):

+        super(And,self).__init__(exprs, savelist)

+        self.mayReturnEmpty = True

+        for e in self.exprs:

+            if not e.mayReturnEmpty:

+                self.mayReturnEmpty = False

+                break

+        self.setWhitespaceChars( exprs[0].whiteChars )

+        self.skipWhitespace = exprs[0].skipWhitespace

+        self.callPreparse = True

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        # pass False as last arg to _parse for first element, since we already

+        # pre-parsed the string as part of our And pre-parsing

+        loc, resultlist = self.exprs[0]._parse( instring, loc, doActions, callPreParse=False )

+        errorStop = False

+        for e in self.exprs[1:]:

+            if isinstance(e, And._ErrorStop):

+                errorStop = True

+                continue

+            if errorStop:

+                try:

+                    loc, exprtokens = e._parse( instring, loc, doActions )

+                except ParseSyntaxException:

+                    raise

+                except ParseBaseException:

+                    pe = sys.exc_info()[1]

+                    raise ParseSyntaxException(pe)

+                except IndexError:

+                    raise ParseSyntaxException( ParseException(instring, len(instring), self.errmsg, self) )

+            else:

+                loc, exprtokens = e._parse( instring, loc, doActions )

+            if exprtokens or exprtokens.keys():

+                resultlist += exprtokens

+        return loc, resultlist

+

+    def __iadd__(self, other ):

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        return self.append( other ) #And( [ self, other ] )

+

+    def checkRecursion( self, parseElementList ):

+        subRecCheckList = parseElementList[:] + [ self ]

+        for e in self.exprs:

+            e.checkRecursion( subRecCheckList )

+            if not e.mayReturnEmpty:

+                break

+

+    def __str__( self ):

+        if hasattr(self,"name"):

+            return self.name

+

+        if self.strRepr is None:

+            self.strRepr = "{" + " ".join( [ _ustr(e) for e in self.exprs ] ) + "}"

+

+        return self.strRepr

+

+

+class Or(ParseExpression):

+    """Requires that at least one C{ParseExpression} is found.

+       If two expressions match, the expression that matches the longest string will be used.

+       May be constructed using the C{'^'} operator.

+    """

+    def __init__( self, exprs, savelist = False ):

+        super(Or,self).__init__(exprs, savelist)

+        self.mayReturnEmpty = False

+        for e in self.exprs:

+            if e.mayReturnEmpty:

+                self.mayReturnEmpty = True

+                break

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        maxExcLoc = -1

+        maxMatchLoc = -1

+        maxException = None

+        for e in self.exprs:

+            try:

+                loc2 = e.tryParse( instring, loc )

+            except ParseException:

+                err = sys.exc_info()[1]

+                if err.loc > maxExcLoc:

+                    maxException = err

+                    maxExcLoc = err.loc

+            except IndexError:

+                if len(instring) > maxExcLoc:

+                    maxException = ParseException(instring,len(instring),e.errmsg,self)

+                    maxExcLoc = len(instring)

+            else:

+                if loc2 > maxMatchLoc:

+                    maxMatchLoc = loc2

+                    maxMatchExp = e

+

+        if maxMatchLoc < 0:

+            if maxException is not None:

+                raise maxException

+            else:

+                raise ParseException(instring, loc, "no defined alternatives to match", self)

+

+        return maxMatchExp._parse( instring, loc, doActions )

+

+    def __ixor__(self, other ):

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        return self.append( other ) #Or( [ self, other ] )

+

+    def __str__( self ):

+        if hasattr(self,"name"):

+            return self.name

+

+        if self.strRepr is None:

+            self.strRepr = "{" + " ^ ".join( [ _ustr(e) for e in self.exprs ] ) + "}"

+

+        return self.strRepr

+

+    def checkRecursion( self, parseElementList ):

+        subRecCheckList = parseElementList[:] + [ self ]

+        for e in self.exprs:

+            e.checkRecursion( subRecCheckList )

+

+

+class MatchFirst(ParseExpression):

+    """Requires that at least one C{ParseExpression} is found.

+       If two expressions match, the first one listed is the one that will match.

+       May be constructed using the C{'|'} operator.

+    """

+    def __init__( self, exprs, savelist = False ):

+        super(MatchFirst,self).__init__(exprs, savelist)

+        if exprs:

+            self.mayReturnEmpty = False

+            for e in self.exprs:

+                if e.mayReturnEmpty:

+                    self.mayReturnEmpty = True

+                    break

+        else:

+            self.mayReturnEmpty = True

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        maxExcLoc = -1

+        maxException = None

+        for e in self.exprs:

+            try:

+                ret = e._parse( instring, loc, doActions )

+                return ret

+            except ParseException, err:

+                if err.loc > maxExcLoc:

+                    maxException = err

+                    maxExcLoc = err.loc

+            except IndexError:

+                if len(instring) > maxExcLoc:

+                    maxException = ParseException(instring,len(instring),e.errmsg,self)

+                    maxExcLoc = len(instring)

+

+        # only got here if no expression matched, raise exception for match that made it the furthest

+        else:

+            if maxException is not None:

+                raise maxException

+            else:

+                raise ParseException(instring, loc, "no defined alternatives to match", self)

+

+    def __ior__(self, other ):

+        if isinstance( other, basestring ):

+            other = Literal( other )

+        return self.append( other ) #MatchFirst( [ self, other ] )

+

+    def __str__( self ):

+        if hasattr(self,"name"):

+            return self.name

+

+        if self.strRepr is None:

+            self.strRepr = "{" + " | ".join( [ _ustr(e) for e in self.exprs ] ) + "}"

+

+        return self.strRepr

+

+    def checkRecursion( self, parseElementList ):

+        subRecCheckList = parseElementList[:] + [ self ]

+        for e in self.exprs:

+            e.checkRecursion( subRecCheckList )

+

+

+class Each(ParseExpression):

+    """Requires all given C{ParseExpression}s to be found, but in any order.

+       Expressions may be separated by whitespace.

+       May be constructed using the C{'&'} operator.

+    """

+    def __init__( self, exprs, savelist = True ):

+        super(Each,self).__init__(exprs, savelist)

+        self.mayReturnEmpty = True

+        for e in self.exprs:

+            if not e.mayReturnEmpty:

+                self.mayReturnEmpty = False

+                break

+        self.skipWhitespace = True

+        self.initExprGroups = True

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if self.initExprGroups:

+            opt1 = [ e.expr for e in self.exprs if isinstance(e,Optional) ]

+            opt2 = [ e for e in self.exprs if e.mayReturnEmpty and e not in opt1 ]

+            self.optionals = opt1 + opt2

+            self.multioptionals = [ e.expr for e in self.exprs if isinstance(e,ZeroOrMore) ]

+            self.multirequired = [ e.expr for e in self.exprs if isinstance(e,OneOrMore) ]

+            self.required = [ e for e in self.exprs if not isinstance(e,(Optional,ZeroOrMore,OneOrMore)) ]

+            self.required += self.multirequired

+            self.initExprGroups = False

+        tmpLoc = loc

+        tmpReqd = self.required[:]

+        tmpOpt  = self.optionals[:]

+        matchOrder = []

+

+        keepMatching = True

+        while keepMatching:

+            tmpExprs = tmpReqd + tmpOpt + self.multioptionals + self.multirequired

+            failed = []

+            for e in tmpExprs:

+                try:

+                    tmpLoc = e.tryParse( instring, tmpLoc )

+                except ParseException:

+                    failed.append(e)

+                else:

+                    matchOrder.append(e)

+                    if e in tmpReqd:

+                        tmpReqd.remove(e)

+                    elif e in tmpOpt:

+                        tmpOpt.remove(e)

+            if len(failed) == len(tmpExprs):

+                keepMatching = False

+

+        if tmpReqd:

+            missing = ", ".join( [ _ustr(e) for e in tmpReqd ] )

+            raise ParseException(instring,loc,"Missing one or more required elements (%s)" % missing )

+

+        # add any unmatched Optionals, in case they have default values defined

+        matchOrder += [e for e in self.exprs if isinstance(e,Optional) and e.expr in tmpOpt]

+

+        resultlist = []

+        for e in matchOrder:

+            loc,results = e._parse(instring,loc,doActions)

+            resultlist.append(results)

+

+        finalResults = ParseResults([])

+        for r in resultlist:

+            dups = {}

+            for k in r.keys():

+                if k in finalResults.keys():

+                    tmp = ParseResults(finalResults[k])

+                    tmp += ParseResults(r[k])

+                    dups[k] = tmp

+            finalResults += ParseResults(r)

+            for k,v in dups.items():

+                finalResults[k] = v

+        return loc, finalResults

+

+    def __str__( self ):

+        if hasattr(self,"name"):

+            return self.name

+

+        if self.strRepr is None:

+            self.strRepr = "{" + " & ".join( [ _ustr(e) for e in self.exprs ] ) + "}"

+

+        return self.strRepr

+

+    def checkRecursion( self, parseElementList ):

+        subRecCheckList = parseElementList[:] + [ self ]

+        for e in self.exprs:

+            e.checkRecursion( subRecCheckList )

+

+

+class ParseElementEnhance(ParserElement):

+    """Abstract subclass of C{ParserElement}, for combining and post-processing parsed tokens."""

+    def __init__( self, expr, savelist=False ):

+        super(ParseElementEnhance,self).__init__(savelist)

+        if isinstance( expr, basestring ):

+            expr = Literal(expr)

+        self.expr = expr

+        self.strRepr = None

+        if expr is not None:

+            self.mayIndexError = expr.mayIndexError

+            self.mayReturnEmpty = expr.mayReturnEmpty

+            self.setWhitespaceChars( expr.whiteChars )

+            self.skipWhitespace = expr.skipWhitespace

+            self.saveAsList = expr.saveAsList

+            self.callPreparse = expr.callPreparse

+            self.ignoreExprs.extend(expr.ignoreExprs)

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        if self.expr is not None:

+            return self.expr._parse( instring, loc, doActions, callPreParse=False )

+        else:

+            raise ParseException("",loc,self.errmsg,self)

+

+    def leaveWhitespace( self ):

+        self.skipWhitespace = False

+        self.expr = self.expr.copy()

+        if self.expr is not None:

+            self.expr.leaveWhitespace()

+        return self

+

+    def ignore( self, other ):

+        if isinstance( other, Suppress ):

+            if other not in self.ignoreExprs:

+                super( ParseElementEnhance, self).ignore( other )

+                if self.expr is not None:

+                    self.expr.ignore( self.ignoreExprs[-1] )

+        else:

+            super( ParseElementEnhance, self).ignore( other )

+            if self.expr is not None:

+                self.expr.ignore( self.ignoreExprs[-1] )

+        return self

+

+    def streamline( self ):

+        super(ParseElementEnhance,self).streamline()

+        if self.expr is not None:

+            self.expr.streamline()

+        return self

+

+    def checkRecursion( self, parseElementList ):

+        if self in parseElementList:

+            raise RecursiveGrammarException( parseElementList+[self] )

+        subRecCheckList = parseElementList[:] + [ self ]

+        if self.expr is not None:

+            self.expr.checkRecursion( subRecCheckList )

+

+    def validate( self, validateTrace=[] ):

+        tmp = validateTrace[:]+[self]

+        if self.expr is not None:

+            self.expr.validate(tmp)

+        self.checkRecursion( [] )

+

+    def __str__( self ):

+        try:

+            return super(ParseElementEnhance,self).__str__()

+        except:

+            pass

+

+        if self.strRepr is None and self.expr is not None:

+            self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.expr) )

+        return self.strRepr

+

+

+class FollowedBy(ParseElementEnhance):

+    """Lookahead matching of the given parse expression.  C{FollowedBy}

+    does *not* advance the parsing position within the input string, it only

+    verifies that the specified parse expression matches at the current

+    position.  C{FollowedBy} always returns a null token list."""

+    def __init__( self, expr ):

+        super(FollowedBy,self).__init__(expr)

+        self.mayReturnEmpty = True

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        self.expr.tryParse( instring, loc )

+        return loc, []

+

+

+class NotAny(ParseElementEnhance):

+    """Lookahead to disallow matching with the given parse expression.  C{NotAny}

+    does *not* advance the parsing position within the input string, it only

+    verifies that the specified parse expression does *not* match at the current

+    position.  Also, C{NotAny} does *not* skip over leading whitespace. C{NotAny}

+    always returns a null token list.  May be constructed using the '~' operator."""
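
+    # Illustrative usage (a sketch added for clarity; not from the upstream pyparsing source).

+    # '~' is a negative lookahead; here an identifier may not be the reserved word "end":

+    #     ident = ~Keyword("end") + Word(alphas)

+    #     ident.parseString("endpoint")   # -> ['endpoint'], while "end" alone is rejected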

+    def __init__( self, expr ):

+        super(NotAny,self).__init__(expr)

+        #~ self.leaveWhitespace()

+        self.skipWhitespace = False  # do NOT use self.leaveWhitespace(), don't want to propagate to exprs

+        self.mayReturnEmpty = True

+        self.errmsg = "Found unwanted token, "+_ustr(self.expr)

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        try:

+            self.expr.tryParse( instring, loc )

+        except (ParseException,IndexError):

+            pass

+        else:

+            #~ raise ParseException(instring, loc, self.errmsg )

+            exc = self.myException

+            exc.loc = loc

+            exc.pstr = instring

+            raise exc

+        return loc, []

+

+    def __str__( self ):

+        if hasattr(self,"name"):

+            return self.name

+

+        if self.strRepr is None:

+            self.strRepr = "~{" + _ustr(self.expr) + "}"

+

+        return self.strRepr

+

+

+class ZeroOrMore(ParseElementEnhance):

+    """Optional repetition of zero or more of the given expression."""

+    def __init__( self, expr ):

+        super(ZeroOrMore,self).__init__(expr)

+        self.mayReturnEmpty = True

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        tokens = []

+        try:

+            loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False )

+            hasIgnoreExprs = ( len(self.ignoreExprs) > 0 )

+            while 1:

+                if hasIgnoreExprs:

+                    preloc = self._skipIgnorables( instring, loc )

+                else:

+                    preloc = loc

+                loc, tmptokens = self.expr._parse( instring, preloc, doActions )

+                if tmptokens or tmptokens.keys():

+                    tokens += tmptokens

+        except (ParseException,IndexError):

+            pass

+

+        return loc, tokens

+

+    def __str__( self ):

+        if hasattr(self,"name"):

+            return self.name

+

+        if self.strRepr is None:

+            self.strRepr = "[" + _ustr(self.expr) + "]..."

+

+        return self.strRepr

+

+    def setResultsName( self, name, listAllMatches=False ):

+        ret = super(ZeroOrMore,self).setResultsName(name,listAllMatches)

+        ret.saveAsList = True

+        return ret

+

+

+class OneOrMore(ParseElementEnhance):

+    """Repetition of one or more of the given expression."""

+    def parseImpl( self, instring, loc, doActions=True ):

+        # must be at least one

+        loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False )

+        try:

+            hasIgnoreExprs = ( len(self.ignoreExprs) > 0 )

+            while 1:

+                if hasIgnoreExprs:

+                    preloc = self._skipIgnorables( instring, loc )

+                else:

+                    preloc = loc

+                loc, tmptokens = self.expr._parse( instring, preloc, doActions )

+                if tmptokens or tmptokens.keys():

+                    tokens += tmptokens

+        except (ParseException,IndexError):

+            pass

+

+        return loc, tokens

+

+    def __str__( self ):

+        if hasattr(self,"name"):

+            return self.name

+

+        if self.strRepr is None:

+            self.strRepr = "{" + _ustr(self.expr) + "}..."

+

+        return self.strRepr

+

+    def setResultsName( self, name, listAllMatches=False ):

+        ret = super(OneOrMore,self).setResultsName(name,listAllMatches)

+        ret.saveAsList = True

+        return ret

+

+class _NullToken(object):

+    def __bool__(self):

+        return False

+    __nonzero__ = __bool__

+    def __str__(self):

+        return ""

+

+_optionalNotMatched = _NullToken()

+class Optional(ParseElementEnhance):

+    """Optional matching of the given expression.

+       A default return string can also be specified, if the optional expression

+       is not found.

+    """

+    def __init__( self, exprs, default=_optionalNotMatched ):

+        super(Optional,self).__init__( exprs, savelist=False )

+        self.defaultValue = default

+        self.mayReturnEmpty = True

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        try:

+            loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False )

+        except (ParseException,IndexError):

+            if self.defaultValue is not _optionalNotMatched:

+                if self.expr.resultsName:

+                    tokens = ParseResults([ self.defaultValue ])

+                    tokens[self.expr.resultsName] = self.defaultValue

+                else:

+                    tokens = [ self.defaultValue ]

+            else:

+                tokens = []

+        return loc, tokens

+

+    def __str__( self ):

+        if hasattr(self,"name"):

+            return self.name

+

+        if self.strRepr is None:

+            self.strRepr = "[" + _ustr(self.expr) + "]"

+

+        return self.strRepr

+

+

+class SkipTo(ParseElementEnhance):

+    """Token for skipping over all undefined text until the matched expression is found.

+       If C{include} is set to true, the matched expression is also parsed (the skipped text

+       and matched expression are returned as a 2-element list).  The C{ignore}

+       argument is used to define grammars (typically quoted strings and comments) that

+       might contain false matches.

+    """

+    def __init__( self, other, include=False, ignore=None, failOn=None ):

+        super( SkipTo, self ).__init__( other )

+        self.ignoreExpr = ignore

+        self.mayReturnEmpty = True

+        self.mayIndexError = False

+        self.includeMatch = include

+        self.asList = False

+        if failOn is not None and isinstance(failOn, basestring):

+            self.failOn = Literal(failOn)

+        else:

+            self.failOn = failOn

+        self.errmsg = "No match found for "+_ustr(self.expr)

+

+    def parseImpl( self, instring, loc, doActions=True ):

+        startLoc = loc

+        instrlen = len(instring)

+        expr = self.expr

+        failParse = False

+        while loc <= instrlen:

+            try:

+                if self.failOn:

+                    try:

+                        self.failOn.tryParse(instring, loc)

+                    except ParseBaseException:

+                        pass

+                    else:

+                        failParse = True

+                        raise ParseException(instring, loc, "Found expression " + str(self.failOn))

+                    failParse = False

+                if self.ignoreExpr is not None:

+                    while 1:

+                        try:

+                            loc = self.ignoreExpr.tryParse(instring,loc)

+                            # print "found ignoreExpr, advance to", loc

+                        except ParseBaseException:

+                            break

+                expr._parse( instring, loc, doActions=False, callPreParse=False )

+                skipText = instring[startLoc:loc]

+                if self.includeMatch:

+                    loc,mat = expr._parse(instring,loc,doActions,callPreParse=False)

+                    if mat:

+                        skipRes = ParseResults( skipText )

+                        skipRes += mat

+                        return loc, [ skipRes ]

+                    else:

+                        return loc, [ skipText ]

+                else:

+                    return loc, [ skipText ]

+            except (ParseException,IndexError):

+                if failParse:

+                    raise

+                else:

+                    loc += 1

+        exc = self.myException

+        exc.loc = loc

+        exc.pstr = instring

+        raise exc

+

+class Forward(ParseElementEnhance):

+    """Forward declaration of an expression to be defined later -

+       used for recursive grammars, such as algebraic infix notation.

+       When the expression is known, it is assigned to the C{Forward} variable using the '<<' operator.

+

+       Note: take care when assigning to C{Forward} not to overlook precedence of operators.

+       Specifically, '|' has a lower precedence than '<<', so that::

+          fwdExpr << a | b | c

+       will actually be evaluated as::

+          (fwdExpr << a) | b | c

+       thereby leaving b and c out as parseable alternatives.  It is recommended that you

+       explicitly group the values inserted into the C{Forward}::

+          fwdExpr << (a | b | c)

+    """

+    def __init__( self, other=None ):

+        super(Forward,self).__init__( other, savelist=False )

+

+    def __lshift__( self, other ):

+        if isinstance( other, basestring ):

+            other = Literal(other)

+        self.expr = other

+        self.mayReturnEmpty = other.mayReturnEmpty

+        self.strRepr = None

+        self.mayIndexError = self.expr.mayIndexError

+        self.mayReturnEmpty = self.expr.mayReturnEmpty

+        self.setWhitespaceChars( self.expr.whiteChars )

+        self.skipWhitespace = self.expr.skipWhitespace

+        self.saveAsList = self.expr.saveAsList

+        self.ignoreExprs.extend(self.expr.ignoreExprs)

+        return None

+

+    def leaveWhitespace( self ):

+        self.skipWhitespace = False

+        return self

+

+    def streamline( self ):

+        if not self.streamlined:

+            self.streamlined = True

+            if self.expr is not None:

+                self.expr.streamline()

+        return self

+

+    def validate( self, validateTrace=[] ):

+        if self not in validateTrace:

+            tmp = validateTrace[:]+[self]

+            if self.expr is not None:

+                self.expr.validate(tmp)

+        self.checkRecursion([])

+

+    def __str__( self ):

+        if hasattr(self,"name"):

+            return self.name

+

+        self._revertClass = self.__class__

+        self.__class__ = _ForwardNoRecurse

+        try:

+            if self.expr is not None:

+                retString = _ustr(self.expr)

+            else:

+                retString = "None"

+        finally:

+            self.__class__ = self._revertClass

+        return self.__class__.__name__ + ": " + retString

+

+    def copy(self):

+        if self.expr is not None:

+            return super(Forward,self).copy()

+        else:

+            ret = Forward()

+            ret << self

+            return ret

+

+class _ForwardNoRecurse(Forward):

+    def __str__( self ):

+        return "..."

+

+class TokenConverter(ParseElementEnhance):

+    """Abstract subclass of C{ParseExpression}, for converting parsed results."""

+    def __init__( self, expr, savelist=False ):

+        super(TokenConverter,self).__init__( expr )#, savelist )

+        self.saveAsList = False

+

+class Upcase(TokenConverter):

+    """Converter to upper case all matching tokens."""

+    def __init__(self, *args):

+        super(Upcase,self).__init__(*args)

+        warnings.warn("Upcase class is deprecated, use upcaseTokens parse action instead",

+                       DeprecationWarning,stacklevel=2)

+

+    def postParse( self, instring, loc, tokenlist ):

+        return list(map( string.upper, tokenlist ))

+

+

+class Combine(TokenConverter):

+    """Converter to concatenate all matching tokens to a single string.

+       By default, the matching patterns must also be contiguous in the input string;

+       this can be disabled by specifying C{'adjacent=False'} in the constructor.

+    """

+    def __init__( self, expr, joinString="", adjacent=True ):

+        super(Combine,self).__init__( expr )

+        # suppress whitespace-stripping in contained parse expressions, but re-enable it on the Combine itself

+        if adjacent:

+            self.leaveWhitespace()

+        self.adjacent = adjacent

+        self.skipWhitespace = True

+        self.joinString = joinString

+        self.callPreparse = True

+

+    def ignore( self, other ):

+        if self.adjacent:

+            ParserElement.ignore(self, other)

+        else:

+            super( Combine, self).ignore( other )

+        return self

+

+    def postParse( self, instring, loc, tokenlist ):

+        retToks = tokenlist.copy()

+        del retToks[:]

+        retToks += ParseResults([ "".join(tokenlist._asStringList(self.joinString)) ], modal=self.modalResults)

+

+        if self.resultsName and len(retToks.keys())>0:

+            return [ retToks ]

+        else:

+            return retToks

+

+class Group(TokenConverter):

+    """Converter to return the matched tokens as a list - useful for returning tokens of C{ZeroOrMore} and C{OneOrMore} expressions."""

+    def __init__( self, expr ):

+        super(Group,self).__init__( expr )

+        self.saveAsList = True

+

+    def postParse( self, instring, loc, tokenlist ):

+        return [ tokenlist ]

+

+class Dict(TokenConverter):

+    """Converter to return a repetitive expression as a list, but also as a dictionary.

+       Each element can also be referenced using the first token in the expression as its key.

+       Useful for tabular report scraping when the first column can be used as an item key.

+    """

+    def __init__( self, exprs ):

+        super(Dict,self).__init__( exprs )

+        self.saveAsList = True

+

+    def postParse( self, instring, loc, tokenlist ):

+        for i,tok in enumerate(tokenlist):

+            if len(tok) == 0:

+                continue

+            ikey = tok[0]

+            if isinstance(ikey,int):

+                ikey = _ustr(tok[0]).strip()

+            if len(tok)==1:

+                tokenlist[ikey] = _ParseResultsWithOffset("",i)

+            elif len(tok)==2 and not isinstance(tok[1],ParseResults):

+                tokenlist[ikey] = _ParseResultsWithOffset(tok[1],i)

+            else:

+                dictvalue = tok.copy() #ParseResults(i)

+                del dictvalue[0]

+                if len(dictvalue)!= 1 or (isinstance(dictvalue,ParseResults) and dictvalue.keys()):

+                    tokenlist[ikey] = _ParseResultsWithOffset(dictvalue,i)

+                else:

+                    tokenlist[ikey] = _ParseResultsWithOffset(dictvalue[0],i)

+

+        if self.resultsName:

+            return [ tokenlist ]

+        else:

+            return tokenlist

+
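+# Illustrative usage of Dict (a sketch, not part of the original pyparsing source):
+#   entry = Group(Word(alphas) + Word(nums))
+#   table = Dict(OneOrMore(entry))
+#   r = table.parseString("shape 3 color 7")
+#   r["shape"]   # -> '3';  r.asList() -> [['shape', '3'], ['color', '7']]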

+

+class Suppress(TokenConverter):

+    """Converter for ignoring the results of a parsed expression."""

+    def postParse( self, instring, loc, tokenlist ):

+        return []

+

+    def suppress( self ):

+        return self

+

+
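+# Illustrative usage of Suppress (a sketch, not part of the original pyparsing source):
+#   term = Word(alphas) + Suppress(",")
+#   OneOrMore(term).parseString("a, b, c,").asList()   # -> ['a', 'b', 'c']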

+class OnlyOnce(object):

+    """Wrapper for parse actions, to ensure they are only called once."""

+    def __init__(self, methodCall):

+        self.callable = _trim_arity(methodCall)

+        self.called = False

+    def __call__(self,s,l,t):

+        if not self.called:

+            results = self.callable(s,l,t)

+            self.called = True

+            return results

+        raise ParseException(s,l,"")

+    def reset(self):

+        self.called = False

+

+def traceParseAction(f):

+    """Decorator for debugging parse actions."""

+    f = _trim_arity(f)

+    def z(*paArgs):

+        thisFunc = f.func_name

+        s,l,t = paArgs[-3:]

+        if len(paArgs)>3:

+            thisFunc = paArgs[0].__class__.__name__ + '.' + thisFunc

+        sys.stderr.write( ">>entering %s(line: '%s', %d, %s)\n" % (thisFunc,line(l,s),l,t) )

+        try:

+            ret = f(*paArgs)

+        except Exception:

+            exc = sys.exc_info()[1]

+            sys.stderr.write( "<<leaving %s (exception: %s)\n" % (thisFunc,exc) )

+            raise

+        sys.stderr.write( "<<leaving %s (ret: %s)\n" % (thisFunc,ret) )

+        return ret

+    try:

+        z.__name__ = f.__name__

+    except AttributeError:

+        pass

+    return z

+

+#

+# global helpers

+#

+def delimitedList( expr, delim=",", combine=False ):

+    """Helper to define a delimited list of expressions - the delimiter defaults to ','.

+       By default, the list elements and delimiters can have intervening whitespace, and

+       comments, but this can be overridden by passing C{combine=True} in the constructor.

+       If C{combine} is set to True, the matching tokens are returned as a single token

+       string, with the delimiters included; otherwise, the matching tokens are returned

+       as a list of tokens, with the delimiters suppressed.

+    """

+    dlName = _ustr(expr)+" ["+_ustr(delim)+" "+_ustr(expr)+"]..."

+    if combine:

+        return Combine( expr + ZeroOrMore( delim + expr ) ).setName(dlName)

+    else:

+        return ( expr + ZeroOrMore( Suppress( delim ) + expr ) ).setName(dlName)

+
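+# Illustrative usage of delimitedList (a sketch, not part of the original pyparsing source):
+#   delimitedList(Word(alphas)).parseString("aa, bb , cc").asList()          # -> ['aa', 'bb', 'cc']
+#   delimitedList(Word(nums), delim="|", combine=True).parseString("1|2|3")  # -> ['1|2|3']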

+def countedArray( expr, intExpr=None ):

+    """Helper to define a counted list of expressions.

+       This helper defines a pattern of the form::

+           integer expr expr expr...

+       where the leading integer tells how many expr expressions follow.

+       The matched tokens are returned as a list of expr tokens - the leading count token is suppressed.

+    """

+    arrayExpr = Forward()

+    def countFieldParseAction(s,l,t):

+        n = t[0]

+        arrayExpr << (n and Group(And([expr]*n)) or Group(empty))

+        return []

+    if intExpr is None:

+        intExpr = Word(nums).setParseAction(lambda t:int(t[0]))

+    else:

+        intExpr = intExpr.copy()

+    intExpr.setName("arrayLen")

+    intExpr.addParseAction(countFieldParseAction, callDuringTry=True)

+    return ( intExpr + arrayExpr )

+
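+# Illustrative usage of countedArray (a sketch, not part of the original pyparsing source):
+#   countedArray(Word(alphas)).parseString("3 ab cd ef").asList()   # -> [['ab', 'cd', 'ef']]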

+def _flatten(L):

+    ret = []

+    for i in L:

+        if isinstance(i,list):

+            ret.extend(_flatten(i))

+        else:

+            ret.append(i)

+    return ret

+

+def matchPreviousLiteral(expr):

+    """Helper to define an expression that is indirectly defined from

+       the tokens matched in a previous expression, that is, it looks

+       for a 'repeat' of a previous expression.  For example::

+           first = Word(nums)

+           second = matchPreviousLiteral(first)

+           matchExpr = first + ":" + second

+       will match C{"1:1"}, but not C{"1:2"}.  Because this matches a

+       previous literal, it will also match the leading C{"1:1"} in C{"1:10"}.

+       If this is not desired, use C{matchPreviousExpr}.

+       Do *not* use with packrat parsing enabled.

+    """

+    rep = Forward()

+    def copyTokenToRepeater(s,l,t):

+        if t:

+            if len(t) == 1:

+                rep << t[0]

+            else:

+                # flatten t tokens

+                tflat = _flatten(t.asList())

+                rep << And( [ Literal(tt) for tt in tflat ] )

+        else:

+            rep << Empty()

+    expr.addParseAction(copyTokenToRepeater, callDuringTry=True)

+    return rep

+

+def matchPreviousExpr(expr):

+    """Helper to define an expression that is indirectly defined from

+       the tokens matched in a previous expression, that is, it looks

+       for a 'repeat' of a previous expression.  For example::

+           first = Word(nums)

+           second = matchPreviousExpr(first)

+           matchExpr = first + ":" + second

+       will match C{"1:1"}, but not C{"1:2"}.  Because this matches by

+       expressions, it will *not* match the leading C{"1:1"} in C{"1:10"};

+       the expressions are evaluated first, and then compared, so

+       C{"1"} is compared with C{"10"}.

+       Do *not* use with packrat parsing enabled.

+    """

+    rep = Forward()

+    e2 = expr.copy()

+    rep << e2

+    def copyTokenToRepeater(s,l,t):

+        matchTokens = _flatten(t.asList())

+        def mustMatchTheseTokens(s,l,t):

+            theseTokens = _flatten(t.asList())

+            if  theseTokens != matchTokens:

+                raise ParseException("",0,"")

+        rep.setParseAction( mustMatchTheseTokens, callDuringTry=True )

+    expr.addParseAction(copyTokenToRepeater, callDuringTry=True)

+    return rep

+

+def _escapeRegexRangeChars(s):

+    #~  escape these chars: ^-]

+    for c in r"\^-]":

+        s = s.replace(c,_bslash+c)

+    s = s.replace("\n",r"\n")

+    s = s.replace("\t",r"\t")

+    return _ustr(s)

+

+def oneOf( strs, caseless=False, useRegex=True ):

+    """Helper to quickly define a set of alternative Literals, and makes sure to do

+       longest-first testing when there is a conflict, regardless of the input order,

+       but returns a C{MatchFirst} for best performance.

+

+       Parameters:

+        - strs - a string of space-delimited literals, or a list of string literals

+        - caseless - (default=False) - treat all literals as caseless

+        - useRegex - (default=True) - as an optimization, will generate a Regex

+          object; otherwise, will generate a C{MatchFirst} object (if C{caseless=True}, or

+          if creating a C{Regex} raises an exception)

+    """

+    if caseless:

+        isequal = ( lambda a,b: a.upper() == b.upper() )

+        masks = ( lambda a,b: b.upper().startswith(a.upper()) )

+        parseElementClass = CaselessLiteral

+    else:

+        isequal = ( lambda a,b: a == b )

+        masks = ( lambda a,b: b.startswith(a) )

+        parseElementClass = Literal

+

+    if isinstance(strs,(list,tuple)):

+        symbols = list(strs[:])

+    elif isinstance(strs,basestring):

+        symbols = strs.split()

+    else:

+        warnings.warn("Invalid argument to oneOf, expected string or list",

+                SyntaxWarning, stacklevel=2)

+

+    i = 0

+    while i < len(symbols)-1:

+        cur = symbols[i]

+        for j,other in enumerate(symbols[i+1:]):

+            if ( isequal(other, cur) ):

+                del symbols[i+j+1]

+                break

+            elif ( masks(cur, other) ):

+                del symbols[i+j+1]

+                symbols.insert(i,other)

+                cur = other

+                break

+        else:

+            i += 1

+

+    if not caseless and useRegex:

+        #~ print (strs,"->", "|".join( [ _escapeRegexChars(sym) for sym in symbols] ))

+        try:

+            if len(symbols)==len("".join(symbols)):

+                return Regex( "[%s]" % "".join( [ _escapeRegexRangeChars(sym) for sym in symbols] ) )

+            else:

+                return Regex( "|".join( [ re.escape(sym) for sym in symbols] ) )

+        except:

+            warnings.warn("Exception creating Regex for oneOf, building MatchFirst",

+                    SyntaxWarning, stacklevel=2)

+

+

+    # last resort, just use MatchFirst

+    return MatchFirst( [ parseElementClass(sym) for sym in symbols ] )

+
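+# Illustrative usage of oneOf (a sketch, not part of the original pyparsing source);
+# longer alternatives are tested first, so "<=" is preferred over "<" here:
+#   comparisonOp = oneOf("< = > <= >= !=")
+#   comparisonOp.parseString("<=").asList()   # -> ['<=']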

+def dictOf( key, value ):

+    """Helper to easily and clearly define a dictionary by specifying the respective patterns

+       for the key and value.  Takes care of defining the C{Dict}, C{ZeroOrMore}, and C{Group} tokens

+       in the proper order.  The key pattern can include delimiting markers or punctuation,

+       as long as they are suppressed, thereby leaving the significant key text.  The value

+       pattern can include named results, so that the C{Dict} results can include named token

+       fields.

+    """

+    return Dict( ZeroOrMore( Group ( key + value ) ) )

+
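+# Illustrative usage of dictOf (a sketch, not part of the original pyparsing source):
+#   attr = dictOf(Word(alphas) + Suppress(":"), Word(nums))
+#   attr.parseString("width: 80 height: 24")["width"]   # -> '80'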

+def originalTextFor(expr, asString=True):

+    """Helper to return the original, untokenized text for a given expression.  Useful to

+       restore the parsed fields of an HTML start tag into the raw tag text itself, or to

+       revert separate tokens with intervening whitespace back to the original matching

+       input text. Simpler to use than the parse action C{L{keepOriginalText}}, and does not

+       require the inspect module to chase up the call stack.  By default, returns a 

+       string containing the original parsed text.  

+       

+       If the optional C{asString} argument is passed as C{False}, then the return value is a 

+       C{ParseResults} containing any results names that were originally matched, and a 

+       single token containing the original matched text from the input string.  So if 

+       the expression passed to C{L{originalTextFor}} contains expressions with defined

+       results names, you must set C{asString} to C{False} if you want to preserve those

+       results name values."""

+    locMarker = Empty().setParseAction(lambda s,loc,t: loc)

+    endlocMarker = locMarker.copy()

+    endlocMarker.callPreparse = False

+    matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end")

+    if asString:

+        extractText = lambda s,l,t: s[t._original_start:t._original_end]

+    else:

+        def extractText(s,l,t):

+            del t[:]

+            t.insert(0, s[t._original_start:t._original_end])

+            del t["_original_start"]

+            del t["_original_end"]

+    matchExpr.setParseAction(extractText)

+    return matchExpr

+
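+# Illustrative usage of originalTextFor (a sketch, not part of the original pyparsing
+# source; assumes SkipTo and makeHTMLTags as defined elsewhere in this module):
+#   bStart, bEnd = makeHTMLTags("b")
+#   fullTag = originalTextFor(bStart + SkipTo(bEnd) + bEnd)
+#   fullTag.searchString("<b>bold <i>text</i></b>")[0][0]   # -> '<b>bold <i>text</i></b>'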

+def ungroup(expr): 

+    """Helper to undo pyparsing's default grouping of And expressions, even

+       if all but one are non-empty."""

+    return TokenConverter(expr).setParseAction(lambda t:t[0])

+

+# convenience constants for positional expressions

+empty       = Empty().setName("empty")

+lineStart   = LineStart().setName("lineStart")

+lineEnd     = LineEnd().setName("lineEnd")

+stringStart = StringStart().setName("stringStart")

+stringEnd   = StringEnd().setName("stringEnd")

+

+_escapedPunc = Word( _bslash, r"\[]-*.$+^?()~ ", exact=2 ).setParseAction(lambda s,l,t:t[0][1])

+_printables_less_backslash = "".join([ c for c in printables if c not in  r"\]" ])

+_escapedHexChar = Regex(r"\\0?[xX][0-9a-fA-F]+").setParseAction(lambda s,l,t:unichr(int(t[0][1:],16)))

+_escapedOctChar = Regex(r"\\0[0-7]+").setParseAction(lambda s,l,t:unichr(int(t[0][1:],8)))

+_singleChar = _escapedPunc | _escapedHexChar | _escapedOctChar | Word(_printables_less_backslash,exact=1)

+_charRange = Group(_singleChar + Suppress("-") + _singleChar)

+_reBracketExpr = Literal("[") + Optional("^").setResultsName("negate") + Group( OneOrMore( _charRange | _singleChar ) ).setResultsName("body") + "]"

+

+_expanded = lambda p: (isinstance(p,ParseResults) and ''.join([ unichr(c) for c in range(ord(p[0]),ord(p[1])+1) ]) or p)

+

+def srange(s):

+    r"""Helper to easily define string ranges for use in Word construction.  Borrows

+       syntax from regexp '[]' string range definitions::

+          srange("[0-9]")   -> "0123456789"

+          srange("[a-z]")   -> "abcdefghijklmnopqrstuvwxyz"

+          srange("[a-z$_]") -> "abcdefghijklmnopqrstuvwxyz$_"

+       The input string must be enclosed in []'s, and the returned string is the expanded

+       character set joined into a single string.

+       The values enclosed in the []'s may be::

+          a single character

+          an escaped character with a leading backslash (such as \- or \])

+          an escaped hex character with a leading '\x' (\x21, which is a '!' character) 

+            (\0x## is also supported for backwards compatibility) 

+          an escaped octal character with a leading '\0' (\041, which is a '!' character)

+          a range of any of the above, separated by a dash ('a-z', etc.)

+          any combination of the above ('aeiouy', 'a-zA-Z0-9_$', etc.)

+    """

+    try:

+        return "".join([_expanded(part) for part in _reBracketExpr.parseString(s).body])

+    except:

+        return ""

+

+def matchOnlyAtCol(n):

+    """Helper method for defining parse actions that require matching at a specific

+       column in the input text.

+    """

+    def verifyCol(strg,locn,toks):

+        if col(locn,strg) != n:

+            raise ParseException(strg,locn,"matched token not at column %d" % n)

+    return verifyCol

+
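+# Illustrative usage of matchOnlyAtCol (a sketch, not part of the original pyparsing source):
+#   col10_word = Word(alphas).setParseAction(matchOnlyAtCol(10))
+# col10_word now only matches words that begin in column 10 of their line.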

+def replaceWith(replStr):

+    """Helper method for common parse actions that simply return a literal value.  Especially

+       useful when used with C{transformString()}.

+    """

+    def _replFunc(*args):

+        return [replStr]

+    return _replFunc

+
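+# Illustrative usage of replaceWith (a sketch, not part of the original pyparsing source):
+#   num = Word(nums).setParseAction(replaceWith("<num>"))
+#   num.transformString("lot 42 of 7")   # -> 'lot <num> of <num>'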

+def removeQuotes(s,l,t):

+    """Helper parse action for removing quotation marks from parsed quoted strings.

+       To use, add this parse action to quoted string using::

+         quotedString.setParseAction( removeQuotes )

+    """

+    return t[0][1:-1]

+

+def upcaseTokens(s,l,t):

+    """Helper parse action to convert tokens to upper case."""

+    return [ tt.upper() for tt in map(_ustr,t) ]

+

+def downcaseTokens(s,l,t):

+    """Helper parse action to convert tokens to lower case."""

+    return [ tt.lower() for tt in map(_ustr,t) ]

+

+def keepOriginalText(s,startLoc,t):

+    """DEPRECATED - use new helper method C{originalTextFor}.

+       Helper parse action to preserve original parsed text,

+       overriding any nested parse actions."""

+    try:

+        endloc = getTokensEndLoc()

+    except ParseException:

+        raise ParseFatalException("incorrect usage of keepOriginalText - may only be called as a parse action")

+    del t[:]

+    t += ParseResults(s[startLoc:endloc])

+    return t

+

+def getTokensEndLoc():

+    """Method to be called from within a parse action to determine the end

+       location of the parsed tokens."""

+    import inspect

+    fstack = inspect.stack()

+    try:

+        # search up the stack (through intervening argument normalizers) for correct calling routine

+        for f in fstack[2:]:

+            if f[3] == "_parseNoCache":

+                endloc = f[0].f_locals["loc"]

+                return endloc

+        else:

+            raise ParseFatalException("incorrect usage of getTokensEndLoc - may only be called from within a parse action")

+    finally:

+        del fstack

+

+def _makeTags(tagStr, xml):

+    """Internal helper to construct opening and closing tag expressions, given a tag name"""

+    if isinstance(tagStr,basestring):

+        resname = tagStr

+        tagStr = Keyword(tagStr, caseless=not xml)

+    else:

+        resname = tagStr.name

+

+    tagAttrName = Word(alphas,alphanums+"_-:")

+    if (xml):

+        tagAttrValue = dblQuotedString.copy().setParseAction( removeQuotes )

+        openTag = Suppress("<") + tagStr("tag") + \

+                Dict(ZeroOrMore(Group( tagAttrName + Suppress("=") + tagAttrValue ))) + \

+                Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">")

+    else:

+        printablesLessRAbrack = "".join( [ c for c in printables if c not in ">" ] )

+        tagAttrValue = quotedString.copy().setParseAction( removeQuotes ) | Word(printablesLessRAbrack)

+        openTag = Suppress("<") + tagStr("tag") + \

+                Dict(ZeroOrMore(Group( tagAttrName.setParseAction(downcaseTokens) + \

+                Optional( Suppress("=") + tagAttrValue ) ))) + \

+                Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">")

+    closeTag = Combine(_L("</") + tagStr + ">")

+

+    openTag = openTag.setResultsName("start"+"".join(resname.replace(":"," ").title().split())).setName("<%s>" % tagStr)

+    closeTag = closeTag.setResultsName("end"+"".join(resname.replace(":"," ").title().split())).setName("</%s>" % tagStr)

+    openTag.tag = resname

+    closeTag.tag = resname

+    return openTag, closeTag

+

+def makeHTMLTags(tagStr):

+    """Helper to construct opening and closing tag expressions for HTML, given a tag name"""

+    return _makeTags( tagStr, False )

+

+def makeXMLTags(tagStr):

+    """Helper to construct opening and closing tag expressions for XML, given a tag name"""

+    return _makeTags( tagStr, True )

+

+def withAttribute(*args,**attrDict):

+    """Helper to create a validating parse action to be used with start tags created

+       with C{makeXMLTags} or C{makeHTMLTags}. Use C{withAttribute} to qualify a starting tag

+       with a required attribute value, to avoid false matches on common tags such as

+       C{<TD>} or C{<DIV>}.

+

+       Call C{withAttribute} with a series of attribute names and values. Specify the list

+       of filter attribute names and values as:

+        - keyword arguments, as in C{(align="right")}, or

+        - as an explicit dict with C{**} operator, when an attribute name is also a Python

+          reserved word, as in C{**{"class":"Customer", "align":"right"}}

+        - a list of name-value tuples, as in ( ("ns1:class", "Customer"), ("ns2:align","right") )

+       For attribute names with a namespace prefix, you must use the second form.  Attribute

+       names are matched insensitive to upper/lower case.

+

+       To verify that the attribute exists, but without specifying a value, pass

+       C{withAttribute.ANY_VALUE} as the value.

+       """

+    if args:

+        attrs = args[:]

+    else:

+        attrs = attrDict.items()

+    attrs = [(k,v) for k,v in attrs]

+    def pa(s,l,tokens):

+        for attrName,attrValue in attrs:

+            if attrName not in tokens:

+                raise ParseException(s,l,"no matching attribute " + attrName)

+            if attrValue != withAttribute.ANY_VALUE and tokens[attrName] != attrValue:

+                raise ParseException(s,l,"attribute '%s' has value '%s', must be '%s'" %

+                                            (attrName, tokens[attrName], attrValue))

+    return pa

+withAttribute.ANY_VALUE = object()

+
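+# Illustrative usage of withAttribute (a sketch, not part of the original pyparsing source):
+#   divStart, divEnd = makeHTMLTags("div")
+#   divStart.setParseAction(withAttribute(align="right"))
+# divStart now only matches <div ...> start tags whose align attribute equals "right".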

+opAssoc = _Constants()

+opAssoc.LEFT = object()

+opAssoc.RIGHT = object()

+

+def operatorPrecedence( baseExpr, opList ):

+    """Helper method for constructing grammars of expressions made up of

+       operators working in a precedence hierarchy.  Operators may be unary or

+       binary, left- or right-associative.  Parse actions can also be attached

+       to operator expressions.

+

+       Parameters:

+        - baseExpr - expression representing the most basic element for the nested expression

+        - opList - list of tuples, one for each operator precedence level in the

+          expression grammar; each tuple is of the form

+          (opExpr, numTerms, rightLeftAssoc, parseAction), where:

+           - opExpr is the pyparsing expression for the operator;

+              may also be a string, which will be converted to a Literal;

+              if numTerms is 3, opExpr is a tuple of two expressions, for the

+              two operators separating the 3 terms

+           - numTerms is the number of terms for this operator (must

+              be 1, 2, or 3)

+           - rightLeftAssoc is the indicator whether the operator is

+              right or left associative, using the pyparsing-defined

+              constants opAssoc.RIGHT and opAssoc.LEFT.

+           - parseAction is the parse action to be associated with

+              expressions matching this operator expression (the

+              parse action tuple member may be omitted)

+    """

+    ret = Forward()

+    lastExpr = baseExpr | ( Suppress('(') + ret + Suppress(')') )

+    for i,operDef in enumerate(opList):

+        opExpr,arity,rightLeftAssoc,pa = (operDef + (None,))[:4]

+        if arity == 3:

+            if opExpr is None or len(opExpr) != 2:

+                raise ValueError("if numterms=3, opExpr must be a tuple or list of two expressions")

+            opExpr1, opExpr2 = opExpr

+        thisExpr = Forward()#.setName("expr%d" % i)

+        if rightLeftAssoc == opAssoc.LEFT:

+            if arity == 1:

+                matchExpr = FollowedBy(lastExpr + opExpr) + Group( lastExpr + OneOrMore( opExpr ) )

+            elif arity == 2:

+                if opExpr is not None:

+                    matchExpr = FollowedBy(lastExpr + opExpr + lastExpr) + Group( lastExpr + OneOrMore( opExpr + lastExpr ) )

+                else:

+                    matchExpr = FollowedBy(lastExpr+lastExpr) + Group( lastExpr + OneOrMore(lastExpr) )

+            elif arity == 3:

+                matchExpr = FollowedBy(lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr) + \

+                            Group( lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr )

+            else:

+                raise ValueError("operator must be unary (1), binary (2), or ternary (3)")

+        elif rightLeftAssoc == opAssoc.RIGHT:

+            if arity == 1:

+                # try to avoid LR with this extra test

+                if not isinstance(opExpr, Optional):

+                    opExpr = Optional(opExpr)

+                matchExpr = FollowedBy(opExpr.expr + thisExpr) + Group( opExpr + thisExpr )

+            elif arity == 2:

+                if opExpr is not None:

+                    matchExpr = FollowedBy(lastExpr + opExpr + thisExpr) + Group( lastExpr + OneOrMore( opExpr + thisExpr ) )

+                else:

+                    matchExpr = FollowedBy(lastExpr + thisExpr) + Group( lastExpr + OneOrMore( thisExpr ) )

+            elif arity == 3:

+                matchExpr = FollowedBy(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) + \

+                            Group( lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr )

+            else:

+                raise ValueError("operator must be unary (1), binary (2), or ternary (3)")

+        else:

+            raise ValueError("operator must indicate right or left associativity")

+        if pa:

+            matchExpr.setParseAction( pa )

+        thisExpr << ( matchExpr | lastExpr )

+        lastExpr = thisExpr

+    ret << lastExpr

+    return ret

+
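+# Illustrative usage of operatorPrecedence (a sketch, not part of the original pyparsing source):
+#   integer = Word(nums).setParseAction(lambda t: int(t[0]))
+#   arith = operatorPrecedence(integer, [
+#       (oneOf("* /"), 2, opAssoc.LEFT),
+#       (oneOf("+ -"), 2, opAssoc.LEFT),
+#   ])
+#   arith.parseString("1+2*3").asList()   # -> [[1, '+', [2, '*', 3]]]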

+dblQuotedString = Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\x[0-9a-fA-F]+)|(?:\\.))*"').setName("string enclosed in double quotes")

+sglQuotedString = Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\x[0-9a-fA-F]+)|(?:\\.))*'").setName("string enclosed in single quotes")

+quotedString = Regex(r'''(?:"(?:[^"\n\r\\]|(?:"")|(?:\\x[0-9a-fA-F]+)|(?:\\.))*")|(?:'(?:[^'\n\r\\]|(?:'')|(?:\\x[0-9a-fA-F]+)|(?:\\.))*')''').setName("quotedString using single or double quotes")

+unicodeString = Combine(_L('u') + quotedString.copy())

+

+def nestedExpr(opener="(", closer=")", content=None, ignoreExpr=quotedString.copy()):

+    """Helper method for defining nested lists enclosed in opening and closing

+       delimiters ("(" and ")" are the default).

+

+       Parameters:

+        - opener - opening character for a nested list (default="("); can also be a pyparsing expression

+        - closer - closing character for a nested list (default=")"); can also be a pyparsing expression

+        - content - expression for items within the nested lists (default=None)

+        - ignoreExpr - expression for ignoring opening and closing delimiters (default=quotedString)

+

+       If an expression is not provided for the content argument, the nested

+       expression will capture all whitespace-delimited content between delimiters

+       as a list of separate values.

+

+       Use the C{ignoreExpr} argument to define expressions that may contain

+       opening or closing characters that should not be treated as opening

+       or closing characters for nesting, such as quotedString or a comment

+       expression.  Specify multiple expressions using an C{L{Or}} or C{L{MatchFirst}}.

+       The default is L{quotedString}, but if no expressions are to be ignored,

+       then pass C{None} for this argument.

+    """

+    if opener == closer:

+        raise ValueError("opening and closing strings cannot be the same")

+    if content is None:

+        if isinstance(opener,basestring) and isinstance(closer,basestring):

+            if len(opener) == 1 and len(closer)==1:

+                if ignoreExpr is not None:

+                    content = (Combine(OneOrMore(~ignoreExpr +

+                                    CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS,exact=1))

+                                ).setParseAction(lambda t:t[0].strip()))

+                else:

+                    content = (empty.copy()+CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS

+                                ).setParseAction(lambda t:t[0].strip()))

+            else:

+                if ignoreExpr is not None:

+                    content = (Combine(OneOrMore(~ignoreExpr + 

+                                    ~Literal(opener) + ~Literal(closer) +

+                                    CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1))

+                                ).setParseAction(lambda t:t[0].strip()))

+                else:

+                    content = (Combine(OneOrMore(~Literal(opener) + ~Literal(closer) +

+                                    CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1))

+                                ).setParseAction(lambda t:t[0].strip()))

+        else:

+            raise ValueError("opening and closing arguments must be strings if no content expression is given")

+    ret = Forward()

+    if ignoreExpr is not None:

+        ret << Group( Suppress(opener) + ZeroOrMore( ignoreExpr | ret | content ) + Suppress(closer) )

+    else:

+        ret << Group( Suppress(opener) + ZeroOrMore( ret | content )  + Suppress(closer) )

+    return ret

+
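+# Illustrative usage of nestedExpr (a sketch, not part of the original pyparsing source):
+#   nestedExpr().parseString("(a (b c) d)").asList()   # -> [['a', ['b', 'c'], 'd']]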

+def indentedBlock(blockStatementExpr, indentStack, indent=True):

+    """Helper method for defining space-delimited indentation blocks, such as

+       those used to define block statements in Python source code.

+

+       Parameters:

+        - blockStatementExpr - expression defining syntax of statement that

+            is repeated within the indented block

+        - indentStack - list created by caller to manage indentation stack

+            (multiple statementWithIndentedBlock expressions within a single grammar

+            should share a common indentStack)

+        - indent - boolean indicating whether block must be indented beyond the

+            current level; set to False for a block of left-most statements

+            (default=True)

+

+       A valid block must contain at least one C{blockStatement}.

+    """

+    def checkPeerIndent(s,l,t):

+        if l >= len(s): return

+        curCol = col(l,s)

+        if curCol != indentStack[-1]:

+            if curCol > indentStack[-1]:

+                raise ParseFatalException(s,l,"illegal nesting")

+            raise ParseException(s,l,"not a peer entry")

+

+    def checkSubIndent(s,l,t):

+        curCol = col(l,s)

+        if curCol > indentStack[-1]:

+            indentStack.append( curCol )

+        else:

+            raise ParseException(s,l,"not a subentry")

+

+    def checkUnindent(s,l,t):

+        if l >= len(s): return

+        curCol = col(l,s)

+        if not(indentStack and curCol < indentStack[-1] and curCol <= indentStack[-2]):

+            raise ParseException(s,l,"not an unindent")

+        indentStack.pop()

+

+    NL = OneOrMore(LineEnd().setWhitespaceChars("\t ").suppress())

+    INDENT = Empty() + Empty().setParseAction(checkSubIndent)

+    PEER   = Empty().setParseAction(checkPeerIndent)

+    UNDENT = Empty().setParseAction(checkUnindent)

+    if indent:

+        smExpr = Group( Optional(NL) +

+            #~ FollowedBy(blockStatementExpr) +

+            INDENT + (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) + UNDENT)

+    else:

+        smExpr = Group( Optional(NL) +

+            (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) )

+    blockStatementExpr.ignore(_bslash + LineEnd())

+    return smExpr

+
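+# Illustrative usage of indentedBlock (a sketch, not part of the original pyparsing
+# source; 'indentStack' is the caller-owned list described above):
+#   indentStack = [1]
+#   stmt = Forward()
+#   entry = Group(Word(alphas) + ":" + indentedBlock(stmt, indentStack))
+#   stmt << (entry | Word(alphas))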

+alphas8bit = srange(r"[\0xc0-\0xd6\0xd8-\0xf6\0xf8-\0xff]")

+punc8bit = srange(r"[\0xa1-\0xbf\0xd7\0xf7]")

+

+anyOpenTag,anyCloseTag = makeHTMLTags(Word(alphas,alphanums+"_:"))

+commonHTMLEntity = Combine(_L("&") + oneOf("gt lt amp nbsp quot").setResultsName("entity") +";").streamline()

+_htmlEntityMap = dict(zip("gt lt amp nbsp quot".split(),'><& "'))

+replaceHTMLEntity = lambda t : t.entity in _htmlEntityMap and _htmlEntityMap[t.entity] or None

+

+# it's easy to get these comment structures wrong - they're very common, so may as well make them available

+cStyleComment = Regex(r"/\*(?:[^*]*\*+)+?/").setName("C style comment")

+

+htmlComment = Regex(r"<!--[\s\S]*?-->")

+restOfLine = Regex(r".*").leaveWhitespace()

+dblSlashComment = Regex(r"\/\/(\\\n|.)*").setName("// comment")

+cppStyleComment = Regex(r"/(?:\*(?:[^*]*\*+)+?/|/[^\n]*(?:\n[^\n]*)*?(?:(?<!\\)|\Z))").setName("C++ style comment")

+

+javaStyleComment = cppStyleComment

+pythonStyleComment = Regex(r"#.*").setName("Python style comment")

+_noncomma = "".join( [ c for c in printables if c != "," ] )

+_commasepitem = Combine(OneOrMore(Word(_noncomma) +

+                                  Optional( Word(" \t") +

+                                            ~Literal(",") + ~LineEnd() ) ) ).streamline().setName("commaItem")

+commaSeparatedList = delimitedList( Optional( quotedString.copy() | _commasepitem, default="") ).setName("commaSeparatedList")

+

+

+if __name__ == "__main__":

+

+    def test( teststring ):

+        try:

+            tokens = simpleSQL.parseString( teststring )

+            tokenlist = tokens.asList()

+            print (teststring + "->"   + str(tokenlist))

+            print ("tokens = "         + str(tokens))

+            print ("tokens.columns = " + str(tokens.columns))

+            print ("tokens.tables = "  + str(tokens.tables))

+            print (tokens.asXML("SQL",True))

+        except ParseBaseException:

+            err = sys.exc_info()[1]

+            print (teststring + "->")

+            print (err.line)

+            print (" "*(err.column-1) + "^")

+            print (err)

+        print()

+

+    selectToken    = CaselessLiteral( "select" )

+    fromToken      = CaselessLiteral( "from" )

+

+    ident          = Word( alphas, alphanums + "_$" )

+    columnName     = delimitedList( ident, ".", combine=True ).setParseAction( upcaseTokens )

+    columnNameList = Group( delimitedList( columnName ) )#.setName("columns")

+    tableName      = delimitedList( ident, ".", combine=True ).setParseAction( upcaseTokens )

+    tableNameList  = Group( delimitedList( tableName ) )#.setName("tables")

+    simpleSQL      = ( selectToken + \

+                     ( '*' | columnNameList ).setResultsName( "columns" ) + \

+                     fromToken + \

+                     tableNameList.setResultsName( "tables" ) )

+

+    test( "SELECT * from XYZZY, ABC" )

+    test( "select * from SYS.XYZZY" )

+    test( "Select A from Sys.dual" )

+    test( "Select AA,BB,CC from Sys.dual" )

+    test( "Select A, B, C from Sys.dual" )

+    test( "Select A, B, C from Sys.dual" )

+    test( "Xelect A, B, C from Sys.dual" )

+    test( "Select A, B, C frox Sys.dual" )

+    test( "Select" )

+    test( "Select ^^^ frox Sys.dual" )

+    test( "Select A, B, C from Sys.dual, Table2   " )

diff --git a/tenjin.py b/tenjin.py
new file mode 100644
index 0000000..db8cdde
--- /dev/null
+++ b/tenjin.py
@@ -0,0 +1,2118 @@
+##
+## $Release: 1.1.1 $
+## $Copyright: copyright(c) 2007-2012 kuwata-lab.com all rights reserved. $
+## $License: MIT License $
+##
+## Permission is hereby granted, free of charge, to any person obtaining
+## a copy of this software and associated documentation files (the
+## "Software"), to deal in the Software without restriction, including
+## without limitation the rights to use, copy, modify, merge, publish,
+## distribute, sublicense, and/or sell copies of the Software, and to
+## permit persons to whom the Software is furnished to do so, subject to
+## the following conditions:
+##
+## The above copyright notice and this permission notice shall be
+## included in all copies or substantial portions of the Software.
+##
+## THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+## EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+## MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+## NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+## LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+## OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+## WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+##
+
+"""Very fast and light-weight template engine based embedded Python.
+   See User's Guide and examples for details.
+   http://www.kuwata-lab.com/tenjin/pytenjin-users-guide.html
+   http://www.kuwata-lab.com/tenjin/pytenjin-examples.html
+"""
+
+__version__  = "$Release: 1.1.1 $"[10:-2]
+__license__  = "$License: MIT License $"[10:-2]
+__all__      = ('Template', 'Engine', )
+
+
+import sys, os, re, time, marshal
+from time import time as _time
+from os.path import getmtime as _getmtime
+from os.path import isfile as _isfile
+random = pickle = unquote = None   # lazy import
+python3 = sys.version_info[0] == 3
+python2 = sys.version_info[0] == 2
+
+logger = None
+
+
+##
+## utilities
+##
+
+def _write_binary_file(filename, content):
+    global random
+    if random is None: from random import random
+    tmpfile = filename + str(random())[1:]
+    f = open(tmpfile, 'w+b')     # on windows, 'w+b' is preferred over 'wb'
+    try:
+        f.write(content)
+    finally:
+        f.close()
+    if os.path.exists(tmpfile):
+        try:
+            os.rename(tmpfile, filename)
+        except:
+            os.remove(filename)  # on windows, existing file should be removed before renaming
+            os.rename(tmpfile, filename)
+
+def _read_binary_file(filename):
+    f = open(filename, 'rb')
+    try:
+        return f.read()
+    finally:
+        f.close()
+
+codecs = None    # lazy import
+
+def _read_text_file(filename, encoding=None):
+    global codecs
+    if not codecs: import codecs
+    f = codecs.open(filename, encoding=(encoding or 'utf-8'))
+    try:
+        return f.read()
+    finally:
+        f.close()
+
+def _read_template_file(filename, encoding=None):
+    s = _read_binary_file(filename)          ## binary(=str)
+    if encoding: s = s.decode(encoding)      ## binary(=str) to unicode
+    return s
+
+_basestring = basestring
+_unicode    = unicode
+_bytes      = str
+
+def _ignore_not_found_error(f, default=None):
+    try:
+        return f()
+    except OSError, ex:
+        if ex.errno == 2:   # error: No such file or directory
+            return default
+        raise
+
+def create_module(module_name, dummy_func=None, **kwargs):
+    """ex. mod = create_module('tenjin.util')"""
+    try:
+        mod = type(sys)(module_name)
+    except:
+        # The module creation above does not work for Jython 2.5.2
+        import imp
+        mod = imp.new_module(module_name)
+
+    mod.__file__ = __file__
+    mod.__dict__.update(kwargs)
+    sys.modules[module_name] = mod
+    if dummy_func:
+        exec(dummy_func.func_code, mod.__dict__)
+    return mod
+
+def _raise(exception_class, *args):
+    raise exception_class(*args)
+
+
+##
+## helper method's module
+##
+
+def _dummy():
+    global unquote
+    unquote = None
+    global to_str, escape, echo, new_cycle, generate_tostrfunc
+    global start_capture, stop_capture, capture_as, captured_as, CaptureContext
+    global _p, _P, _decode_params
+
+    def generate_tostrfunc(encode=None, decode=None):
+        """Generate 'to_str' function with encode or decode encoding.
+           ex. generate to_str() function which encodes unicode into binary(=str).
+              to_str = tenjin.generate_tostrfunc(encode='utf-8')
+              repr(to_str(u'hoge'))  #=> 'hoge' (str)
+           ex. generate to_str() function which decodes binary(=str) into unicode.
+              to_str = tenjin.generate_tostrfunc(decode='utf-8')
+              repr(to_str('hoge'))   #=> u'hoge' (unicode)
+        """
+        if encode:
+            if decode:
+                raise ValueError("can't specify both encode and decode encoding.")
+            else:
+                def to_str(val,   _str=str, _unicode=unicode, _isa=isinstance, _encode=encode):
+                    """Convert val into string or return '' if None. Unicode will be encoded into binary(=str)."""
+                    if _isa(val, _str):     return val
+                    if val is None:         return ''
+                    #if _isa(val, _unicode): return val.encode(_encode)  # unicode to binary(=str)
+                    if _isa(val, _unicode):
+                        return val.encode(_encode)  # unicode to binary(=str)
+                    return _str(val)
+        else:
+            if decode:
+                def to_str(val,   _str=str, _unicode=unicode, _isa=isinstance, _decode=decode):
+                    """Convert val into string or return '' if None. Binary(=str) will be decoded into unicode."""
+                    #if _isa(val, _str):     return val.decode(_decode)  # binary(=str) to unicode
+                    if _isa(val, _str):
+                        return val.decode(_decode)
+                    if val is None:         return ''
+                    if _isa(val, _unicode): return val
+                    return _unicode(val)
+            else:
+                def to_str(val,   _str=str, _unicode=unicode, _isa=isinstance):
+                    """Convert val into string or return '' if None. Both binary(=str) and unicode will be retruned as-is."""
+                    if _isa(val, _str):     return val
+                    if val is None:         return ''
+                    if _isa(val, _unicode): return val
+                    return _str(val)
+        return to_str
+
+    to_str = generate_tostrfunc(encode='utf-8')  # or encode=None?
+
+    def echo(string):
+        """add string value into _buf. this is equivarent to '#{string}'."""
+        lvars = sys._getframe(1).f_locals   # local variables
+        lvars['_buf'].append(string)
+
+    def new_cycle(*values):
+        """Generate cycle object.
+           ex.
+             cycle = new_cycle('odd', 'even')
+             print(cycle())   #=> 'odd'
+             print(cycle())   #=> 'even'
+             print(cycle())   #=> 'odd'
+             print(cycle())   #=> 'even'
+        """
+        def gen(values):
+            i, n = 0, len(values)
+            while True:
+                yield values[i]
+                i = (i + 1) % n
+        return gen(values).next
+
+    class CaptureContext(object):
+
+        def __init__(self, name, store_to_context=True, lvars=None):
+            self.name  = name
+            self.store_to_context = store_to_context
+            self.lvars = lvars or sys._getframe(1).f_locals
+
+        def __enter__(self):
+            lvars = self.lvars
+            self._buf_orig = lvars['_buf']
+            lvars['_buf']    = _buf = []
+            lvars['_extend'] = _buf.extend
+            return self
+
+        def __exit__(self, *args):
+            lvars = self.lvars
+            _buf = lvars['_buf']
+            lvars['_buf']    = self._buf_orig
+            lvars['_extend'] = self._buf_orig.extend
+            lvars[self.name] = self.captured = ''.join(_buf)
+            if self.store_to_context and '_context' in lvars:
+                lvars['_context'][self.name] = self.captured
+
+        def __iter__(self):
+            self.__enter__()
+            yield self
+            self.__exit__()
+
+    def start_capture(varname=None, _depth=1):
+        """(obsolete) start capturing with name."""
+        lvars = sys._getframe(_depth).f_locals
+        capture_context = CaptureContext(varname, None, lvars)
+        lvars['_capture_context'] = capture_context
+        capture_context.__enter__()
+
+    def stop_capture(store_to_context=True, _depth=1):
+        """(obsolete) stop capturing and return the result of capturing.
+           if store_to_context is True then the result is stored into _context[varname].
+        """
+        lvars = sys._getframe(_depth).f_locals
+        capture_context = lvars.pop('_capture_context', None)
+        if not capture_context:
+            raise Exception('stop_capture(): start_capture() is not called before.')
+        capture_context.store_to_context = store_to_context
+        capture_context.__exit__()
+        return capture_context.captured
+
+    def capture_as(name, store_to_context=True):
+        """capture partial of template."""
+        return CaptureContext(name, store_to_context, sys._getframe(1).f_locals)
+
+    def captured_as(name, _depth=1):
+        """helper method for layout template.
+           if captured string is found then append it to _buf and return True,
+           else return False.
+        """
+        lvars = sys._getframe(_depth).f_locals   # local variables
+        if name in lvars:
+            _buf = lvars['_buf']
+            _buf.append(lvars[name])
+            return True
+        return False
+
+    def _p(arg):
+        """ex. '/show/'+_p("item['id']") => "/show/#{item['id']}" """
+        return '<`#%s#`>' % arg    # decoded into #{...} by preprocessor
+
+    def _P(arg):
+        """ex. '<b>%s</b>' % _P("item['id']") => "<b>${item['id']}</b>" """
+        return '<`$%s$`>' % arg    # decoded into ${...} by preprocessor
+
+    def _decode_params(s):
+        """decode <`#...#`> and <`$...$`> into #{...} and ${...}"""
+        global unquote
+        if unquote is None:
+            from urllib import unquote
+        dct = { 'lt':'<', 'gt':'>', 'amp':'&', 'quot':'"', '#039':"'", }
+        def unescape(s):
+            #return s.replace('&lt;', '<').replace('&gt;', '>').replace('&quot;', '"').replace('&#039;', "'").replace('&amp;',  '&')
+            return re.sub(r'&(lt|gt|quot|amp|#039);',  lambda m: dct[m.group(1)],  s)
+        s = to_str(s)
+        s = re.sub(r'%3C%60%23(.*?)%23%60%3E', lambda m: '#{%s}' % unquote(m.group(1)), s)
+        s = re.sub(r'%3C%60%24(.*?)%24%60%3E', lambda m: '${%s}' % unquote(m.group(1)), s)
+        s = re.sub(r'&lt;`#(.*?)#`&gt;',   lambda m: '#{%s}' % unescape(m.group(1)), s)
+        s = re.sub(r'&lt;`\$(.*?)\$`&gt;', lambda m: '${%s}' % unescape(m.group(1)), s)
+        s = re.sub(r'<`#(.*?)#`>', r'#{\1}', s)
+        s = re.sub(r'<`\$(.*?)\$`>', r'${\1}', s)
+        return s
+
+helpers = create_module('tenjin.helpers', _dummy, sys=sys, re=re)
+helpers.__all__ = ['to_str', 'escape', 'echo', 'new_cycle', 'generate_tostrfunc',
+                   'start_capture', 'stop_capture', 'capture_as', 'captured_as',
+                   'not_cached', 'echo_cached', 'cache_as',
+                   '_p', '_P', '_decode_params',
+                   ]
+generate_tostrfunc = helpers.generate_tostrfunc
+
+
+##
+## escaped module
+##
+def _dummy():
+    global is_escaped, as_escaped, to_escaped
+    global Escaped, EscapedStr, EscapedUnicode
+    global __all__
+    __all__ = ('is_escaped', 'as_escaped', 'to_escaped', ) #'Escaped', 'EscapedStr',
+
+    class Escaped(object):
+        """marking class that object is already escaped."""
+        pass
+
+    def is_escaped(value):
+        """return True if value is marked as escaped, else return False."""
+        return isinstance(value, Escaped)
+
+    class EscapedStr(str, Escaped):
+        """string class which is marked as escaped."""
+        pass
+
+    class EscapedUnicode(unicode, Escaped):
+        """unicode class which is marked as escaped."""
+        pass
+
+    def as_escaped(s):
+        """mark string as escaped, without escaping."""
+        if isinstance(s, str):     return EscapedStr(s)
+        if isinstance(s, unicode): return EscapedUnicode(s)
+        raise TypeError("as_escaped(%r): expected str or unicode." % (s, ))
+
+    def to_escaped(value):
+        """convert any value into string and escape it.
+           if value is already marked as escaped, don't escape it."""
+        if hasattr(value, '__html__'):
+            value = value.__html__()
+        if is_escaped(value):
+            #return value     # EscapedUnicode should be converted into EscapedStr
+            return as_escaped(_helpers.to_str(value))
+        #if isinstance(value, _basestring):
+        #    return as_escaped(_helpers.escape(value))
+        return as_escaped(_helpers.escape(_helpers.to_str(value)))
+
+escaped = create_module('tenjin.escaped', _dummy, _helpers=helpers)
+
+
+##
+## module for html
+##
+def _dummy():
+    global escape_html, escape_xml, escape, tagattr, tagattrs, _normalize_attrs
+    global checked, selected, disabled, nl2br, text2html, nv, js_link
+
+    #_escape_table = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }
+    #_escape_pattern = re.compile(r'[&<>"]')
+    ##_escape_callable = lambda m: _escape_table[m.group(0)]
+    ##_escape_callable = lambda m: _escape_table.__get__(m.group(0))
+    #_escape_get     = _escape_table.__getitem__
+    #_escape_callable = lambda m: _escape_get(m.group(0))
+    #_escape_sub     = _escape_pattern.sub
+
+    #def escape_html(s):
+    #    return s                                          # 3.02
+
+    #def escape_html(s):
+    #    return _escape_pattern.sub(_escape_callable, s)   # 6.31
+
+    #def escape_html(s):
+    #    return _escape_sub(_escape_callable, s)           # 6.01
+
+    #def escape_html(s, _p=_escape_pattern, _f=_escape_callable):
+    #    return _p.sub(_f, s)                              # 6.27
+
+    #def escape_html(s, _sub=_escape_pattern.sub, _callable=_escape_callable):
+    #    return _sub(_callable, s)                         # 6.04
+
+    #def escape_html(s):
+    #    s = s.replace('&', '&amp;')
+    #    s = s.replace('<', '&lt;')
+    #    s = s.replace('>', '&gt;')
+    #    s = s.replace('"', '&quot;')
+    #    return s                                          # 5.83
+
+    def escape_html(s):
+        """Escape '&', '<', '>', '"' into '&amp;', '&lt;', '&gt;', '&quot;'."""
+        return s.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;').replace('"', '&quot;').replace("'", '&#39;')   # 5.72
+
+    escape_xml = escape_html   # for backward compatibility
+
+    def tagattr(name, expr, value=None, escape=True):
+        """(experimental) Return ' name="value"' if expr is true value, else '' (empty string).
+           If value is not specified, expr is used as value instead."""
+        if not expr and expr != 0: return _escaped.as_escaped('')
+        if value is None: value = expr
+        if escape: value = _escaped.to_escaped(value)
+        return _escaped.as_escaped(' %s="%s"' % (name, value))
+
+    def tagattrs(**kwargs):
+        """(experimental) built html tag attribtes.
+           ex.
+           >>> tagattrs(klass='main', size=20)
+           ' class="main" size="20"'
+           >>> tagattrs(klass='', size=0)
+           ''
+        """
+        kwargs = _normalize_attrs(kwargs)
+        esc = _escaped.to_escaped
+        s = ''.join([ ' %s="%s"' % (k, esc(v)) for k, v in kwargs.iteritems() if v or v == 0 ])
+        return _escaped.as_escaped(s)
+
+    def _normalize_attrs(kwargs):
+        if 'klass'    in kwargs: kwargs['class']    = kwargs.pop('klass')
+        if 'checked'  in kwargs: kwargs['checked']  = kwargs.pop('checked')  and 'checked'  or None
+        if 'selected' in kwargs: kwargs['selected'] = kwargs.pop('selected') and 'selected' or None
+        if 'disabled' in kwargs: kwargs['disabled'] = kwargs.pop('disabled') and 'disabled' or None
+        return kwargs
+
+    def checked(expr):
+        """return ' checked="checked"' if expr is true."""
+        return _escaped.as_escaped(expr and ' checked="checked"' or '')
+
+    def selected(expr):
+        """return ' selected="selected"' if expr is true."""
+        return _escaped.as_escaped(expr and ' selected="selected"' or '')
+
+    def disabled(expr):
+        """return ' disabled="disabled"' if expr is true."""
+        return _escaped.as_escaped(expr and ' disabled="disabled"' or '')
+
+    def nl2br(text):
+        """replace "\n" to "<br />\n" and return it."""
+        if not text:
+            return _escaped.as_escaped('')
+        return _escaped.as_escaped(text.replace('\n', '<br />\n'))
+
+    def text2html(text, use_nbsp=True):
+        """(experimental) escape xml characters, replace "\n" to "<br />\n", and return it."""
+        if not text:
+            return _escaped.as_escaped('')
+        s = _escaped.to_escaped(text)
+        if use_nbsp: s = s.replace('  ', ' &nbsp;')
+        #return nl2br(s)
+        s = s.replace('\n', '<br />\n')
+        return _escaped.as_escaped(s)
+
+    def nv(name, value, sep=None, **kwargs):
+        """(experimental) Build name and value attributes.
+           ex.
+           >>> nv('rank', 'A')
+           'name="rank" value="A"'
+           >>> nv('rank', 'A', '.')
+           'name="rank" value="A" id="rank.A"'
+           >>> nv('rank', 'A', '.', checked=True)
+           'name="rank" value="A" id="rank.A" checked="checked"'
+           >>> nv('rank', 'A', '.', klass='error', style='color:red')
+           'name="rank" value="A" id="rank.A" class="error" style="color:red"'
+        """
+        name  = _escaped.to_escaped(name)
+        value = _escaped.to_escaped(value)
+        s = sep and 'name="%s" value="%s" id="%s"' % (name, value, name+sep+value) \
+                or  'name="%s" value="%s"'         % (name, value)
+        html = kwargs and s + tagattrs(**kwargs) or s
+        return _escaped.as_escaped(html)
+
+    def js_link(label, onclick, **kwargs):
+        s = kwargs and tagattrs(**kwargs) or ''
+        html = '<a href="javascript:undefined" onclick="%s;return false"%s>%s</a>' % \
+                  (_escaped.to_escaped(onclick), s, _escaped.to_escaped(label))
+        return _escaped.as_escaped(html)
+
+html = create_module('tenjin.html', _dummy, helpers=helpers, _escaped=escaped)
+helpers.escape = html.escape_html
+helpers.html = html   # for backward compatibility
+sys.modules['tenjin.helpers.html'] = html
+
+
+##
+## utility function to set default encoding of template files
+##
+_template_encoding = (None, 'utf-8')    # encodings for decode and encode
+
+def set_template_encoding(decode=None, encode=None):
+    """Set default encoding of template files.
+       This should be called before importing helper functions.
+       ex.
+          ## I like template files to be unicode-base like Django.
+          import tenjin
+          tenjin.set_template_encoding('utf-8')  # should be called before importing helpers
+          from tenjin.helpers import *
+    """
+    global _template_encoding
+    if _template_encoding == (decode, encode):
+        return
+    if decode and encode:
+        raise ValueError("set_template_encoding(): cannot specify both decode and encode.")
+    if not decode and not encode:
+        raise ValueError("set_template_encoding(): decode or encode should be specified.")
+    if decode:
+        Template.encoding = decode    # unicode base template
+        helpers.to_str = helpers.generate_tostrfunc(decode=decode)
+    else:
+        Template.encoding = None      # binary base template
+        helpers.to_str = helpers.generate_tostrfunc(encode=encode)
+    _template_encoding = (decode, encode)
+
+
+##
+## Template class
+##
+
+class TemplateSyntaxError(SyntaxError):
+
+    def build_error_message(self):
+        ex = self
+        if not ex.text:
+            return self.args[0]
+        return ''.join([
+            "%s:%s:%s: %s\n" % (ex.filename, ex.lineno, ex.offset, ex.msg, ),
+            "%4d: %s\n"      % (ex.lineno, ex.text.rstrip(), ),
+            "     %s^\n"     % (' ' * ex.offset, ),
+        ])
+
+
+class Template(object):
+    """Convert and evaluate embedded python string.
+       See User's Guide and examples for details.
+       http://www.kuwata-lab.com/tenjin/pytenjin-users-guide.html
+       http://www.kuwata-lab.com/tenjin/pytenjin-examples.html
+    """
+
+    ## default value of attributes
+    filename   = None
+    encoding   = None
+    escapefunc = 'escape'
+    tostrfunc  = 'to_str'
+    indent     = 4
+    preamble   = None    # "_buf = []; _expand = _buf.expand; _to_str = to_str; _escape = escape"
+    postamble  = None    # "print ''.join(_buf)"
+    smarttrim  = None
+    args       = None
+    timestamp  = None
+    trace      = False   # if True then '<!-- begin: file -->' and '<!-- end: file -->' are printed
+
+    def __init__(self, filename=None, encoding=None, input=None, escapefunc=None, tostrfunc=None,
+                       indent=None, preamble=None, postamble=None, smarttrim=None, trace=None):
+        """Initailizer of Template class.
+
+           filename:str (=None)
+             Filename to convert (optional). If None, nothing is converted.
+           encoding:str (=None)
+             Encoding name. If specified, template string is converted into
+             unicode object internally.
+             Template.render() returns str object if encoding is None,
+             else returns unicode object if encoding name is specified.
+           input:str (=None)
+             Input string. In other words, content of template file.
+             Template file will not be read if this argument is specified.
+           escapefunc:str (='escape')
+             Escape function name.
+           tostrfunc:str (='to_str')
+             'to_str' function name.
+           indent:int (=4)
+             Indent width.
+           preamble:str or bool (=None)
+             Preamble string which is inserted into python code.
+             If true, '_buf = []; ' is used instead.
+           postamble:str or bool (=None)
+             Postamble string which is appended to python code.
+             If true, 'print("".join(_buf))' is used instead.
+           smarttrim:bool (=None)
+             If True then "<div>\\n#{_context}\\n</div>" is parsed as
+             "<div>\\n#{_context}</div>".
+        """
+        if encoding   is not None:  self.encoding   = encoding
+        if escapefunc is not None:  self.escapefunc = escapefunc
+        if tostrfunc  is not None:  self.tostrfunc  = tostrfunc
+        if indent     is not None:  self.indent     = indent
+        if preamble   is not None:  self.preamble   = preamble
+        if postamble  is not None:  self.postamble  = postamble
+        if smarttrim  is not None:  self.smarttrim  = smarttrim
+        if trace      is not None:  self.trace      = trace
+        #
+        if preamble  is True:  self.preamble  = "_buf = []"
+        if postamble is True:  self.postamble = "print(''.join(_buf))"
+        if input:
+            self.convert(input, filename)
+            self.timestamp = False      # False means 'file does not exist' (= Engine should not check timestamp of file)
+        elif filename:
+            self.convert_file(filename)
+        else:
+            self._reset()
+
+    def _reset(self, input=None, filename=None):
+        self.script   = None
+        self.bytecode = None
+        self.input    = input
+        self.filename = filename
+        if input != None:
+            i = input.find("\n")
+            if i < 0:
+                self.newline = "\n"   # or None
+            elif len(input) >= 2 and input[i-1] == "\r":
+                self.newline = "\r\n"
+            else:
+                self.newline = "\n"
+        self._localvars_assignments_added = False
+
+    def _localvars_assignments(self):
+        return "_extend=_buf.extend;_to_str=%s;_escape=%s; " % (self.tostrfunc, self.escapefunc)
+
+    def before_convert(self, buf):
+        if self.preamble:
+            eol = self.input.startswith('<?py') and "\n" or "; "
+            buf.append(self.preamble + eol)
+
+    def after_convert(self, buf):
+        if self.postamble:
+            if buf and not buf[-1].endswith("\n"):
+                buf.append("\n")
+            buf.append(self.postamble + "\n")
+
+    def convert_file(self, filename):
+        """Convert file into python script and return it.
+           This is equivalent to convert(open(filename).read(), filename).
+        """
+        input = _read_template_file(filename)
+        return self.convert(input, filename)
+
+    def convert(self, input, filename=None):
+        """Convert string in which python code is embedded into python script and return it.
+
+           input:str
+             Input string to convert into python code.
+           filename:str (=None)
+             Filename of input. This is optional but recommended for error reporting.
+        """
+        if self.encoding and isinstance(input, str):
+            input = input.decode(self.encoding)
+        self._reset(input, filename)
+        buf = []
+        self.before_convert(buf)
+        self.parse_stmts(buf, input)
+        self.after_convert(buf)
+        script = ''.join(buf)
+        self.script = script
+        return script
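+    ## Rough sketch of what convert() returns for a tiny input (the generated
+    ## code is an internal detail and may differ slightly):
+    ## ex.
+    ##   t = Template()
+    ##   t.convert("<p>#{value}</p>\n")
+    ##   #=> "_extend=_buf.extend;_to_str=to_str;_escape=escape; _extend(('''<p>''', _to_str(value), '''</p>\n''', ));\n"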
+
+    STMT_PATTERN = (r'<\?py( |\t|\r?\n)(.*?) ?\?>([ \t]*\r?\n)?', re.S)
+
+    def stmt_pattern(self):
+        pat = self.STMT_PATTERN
+        if isinstance(pat, tuple):
+            pat = self.__class__.STMT_PATTERN = re.compile(*pat)
+        return pat
+
+    def parse_stmts(self, buf, input):
+        if not input: return
+        rexp = self.stmt_pattern()
+        is_bol = True
+        index = 0
+        for m in rexp.finditer(input):
+            mspace, code, rspace = m.groups()
+            #mspace, close, rspace = m.groups()
+            #code = input[m.start()+4+len(mspace):m.end()-len(close)-(rspace and len(rspace) or 0)]
+            text = input[index:m.start()]
+            index = m.end()
+            ## detect spaces at beginning of line
+            lspace = None
+            if text == '':
+                if is_bol:
+                    lspace = ''
+            elif text[-1] == '\n':
+                lspace = ''
+            else:
+                rindex = text.rfind('\n')
+                if rindex < 0:
+                    if is_bol and text.isspace():
+                        lspace, text = text, ''
+                else:
+                    s = text[rindex+1:]
+                    if s.isspace():
+                        lspace, text = s, text[:rindex+1]
+            #is_bol = rspace is not None
+            ## add text, spaces, and statement
+            self.parse_exprs(buf, text, is_bol)
+            is_bol = rspace is not None
+            #if mspace == "\n":
+            if mspace and mspace.endswith("\n"):
+                code = "\n" + (code or "")
+            #if rspace == "\n":
+            if rspace and rspace.endswith("\n"):
+                code = (code or "") + "\n"
+            if code:
+                code = self.statement_hook(code)
+                m = self._match_to_args_declaration(code)
+                if m:
+                    self._add_args_declaration(buf, m)
+                else:
+                    self.add_stmt(buf, code)
+        rest = input[index:]
+        if rest:
+            self.parse_exprs(buf, rest)
+        self._arrange_indent(buf)
+
+    def statement_hook(self, stmt):
+        """expand macros and parse '#@ARGS' in a statement."""
+        return stmt.replace("\r\n", "\n")   # Python can't handle "\r\n" in code
+
+    def _match_to_args_declaration(self, stmt):
+        if self.args is not None:
+            return None
+        args_pattern = r'^ *#@ARGS(?:[ \t]+(.*?))?$'
+        return re.match(args_pattern, stmt)
+
+    def _add_args_declaration(self, buf, m):
+        arr = (m.group(1) or '').split(',')
+        args = [];  declares = []
+        for s in arr:
+            arg = s.strip()
+            if not s: continue
+            if not re.match('^[a-zA-Z_]\w*$', arg):
+                raise ValueError("%r: invalid template argument." % arg)
+            args.append(arg)
+            declares.append("%s = _context.get('%s'); " % (arg, arg))
+        self.args = args
+        #nl = stmt[m.end():]
+        #if nl: declares.append(nl)
+        buf.append(''.join(declares) + "\n")
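+    ## A hypothetical template using the '#@ARGS' declaration; the listed
+    ## names become local variables pulled from _context:
+    ## ex.
+    ##   <?py #@ARGS title, items ?>
+    ##   <h1>${title}</h1>
+    ##   <?py for item in items: ?>
+    ##   <li>#{item}</li>
+    ##   <?py #endfor ?>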
+
+    s = '(?:\{.*?\}.*?)*'
+    EXPR_PATTERN = (r'#\{(.*?'+s+r')\}|\$\{(.*?'+s+r')\}|\{=(?:=(.*?)=|(.*?))=\}', re.S)
+    del s
+
+    def expr_pattern(self):
+        pat = self.EXPR_PATTERN
+        if isinstance(pat, tuple):
+            self.__class__.EXPR_PATTERN = pat = re.compile(*pat)
+        return pat
+
+    def get_expr_and_flags(self, match):
+        expr1, expr2, expr3, expr4 = match.groups()
+        if expr1 is not None: return expr1, (False, True)   # not escape,  call to_str
+        if expr2 is not None: return expr2, (True,  True)   # call escape, call to_str
+        if expr3 is not None: return expr3, (False, True)   # not escape,  call to_str
+        if expr4 is not None: return expr4, (True,  True)   # call escape, call to_str
+
+    def parse_exprs(self, buf, input, is_bol=False):
+        buf2 = []
+        self._parse_exprs(buf2, input, is_bol)
+        if buf2:
+            buf.append(''.join(buf2))
+
+    def _parse_exprs(self, buf, input, is_bol=False):
+        if not input: return
+        self.start_text_part(buf)
+        rexp = self.expr_pattern()
+        smarttrim = self.smarttrim
+        nl = self.newline
+        nl_len  = len(nl)
+        pos = 0
+        for m in rexp.finditer(input):
+            start = m.start()
+            text  = input[pos:start]
+            pos   = m.end()
+            expr, flags = self.get_expr_and_flags(m)
+            #
+            if text:
+                self.add_text(buf, text)
+            self.add_expr(buf, expr, *flags)
+            #
+            if smarttrim:
+                flag_bol = text.endswith(nl) or not text and (start > 0  or is_bol)
+                if flag_bol and not flags[0] and input[pos:pos+nl_len] == nl:
+                    pos += nl_len
+                    buf.append("\n")
+        if smarttrim:
+            if buf and buf[-1] == "\n":
+                buf.pop()
+        rest = input[pos:]
+        if rest:
+            self.add_text(buf, rest, True)
+        self.stop_text_part(buf)
+        if input[-1] == '\n':
+            buf.append("\n")
+
+    def start_text_part(self, buf):
+        self._add_localvars_assignments_to_text(buf)
+        #buf.append("_buf.extend((")
+        buf.append("_extend((")
+
+    def _add_localvars_assignments_to_text(self, buf):
+        if not self._localvars_assignments_added:
+            self._localvars_assignments_added = True
+            buf.append(self._localvars_assignments())
+
+    def stop_text_part(self, buf):
+        buf.append("));")
+
+    def _quote_text(self, text):
+        text = re.sub(r"(['\\\\])", r"\\\1", text)
+        text = text.replace("\r\n", "\\r\n")
+        return text
+
+    def add_text(self, buf, text, encode_newline=False):
+        if not text: return
+        use_unicode = self.encoding and python2
+        buf.append(use_unicode and "u'''" or "'''")
+        text = self._quote_text(text)
+        if   not encode_newline:    buf.extend((text,       "''', "))
+        elif text.endswith("\r\n"): buf.extend((text[0:-2], "\\r\\n''', "))
+        elif text.endswith("\n"):   buf.extend((text[0:-1], "\\n''', "))
+        else:                       buf.extend((text,       "''', "))
+
+    _add_text = add_text
+
+    def add_expr(self, buf, code, *flags):
+        if not code or code.isspace(): return
+        flag_escape, flag_tostr = flags
+        if not self.tostrfunc:  flag_tostr  = False
+        if not self.escapefunc: flag_escape = False
+        if flag_tostr and flag_escape: s1, s2 = "_escape(_to_str(", ")), "
+        elif flag_tostr:               s1, s2 = "_to_str(", "), "
+        elif flag_escape:              s1, s2 = "_escape(", "), "
+        else:                          s1, s2 = "(", "), "
+        buf.extend((s1, code, s2, ))
+
+    def add_stmt(self, buf, code):
+        if not code: return
+        lines = code.splitlines(True)   # keep "\n"
+        if lines[-1][-1] != "\n":
+            lines[-1] = lines[-1] + "\n"
+        buf.extend(lines)
+        self._add_localvars_assignments_to_stmts(buf)
+
+    def _add_localvars_assignments_to_stmts(self, buf):
+        if self._localvars_assignments_added:
+            return
+        for index, stmt in enumerate(buf):
+            if not re.match(r'^[ \t]*(?:\#|_buf ?= ?\[\]|from __future__)', stmt):
+                break
+        else:
+            return
+        self._localvars_assignments_added = True
+        if re.match(r'^[ \t]*(if|for|while|def|with|class)\b', stmt):
+            buf.insert(index, self._localvars_assignments() + "\n")
+        else:
+            buf[index] = self._localvars_assignments() + buf[index]
+
+
+    _START_WORDS = dict.fromkeys(('for', 'if', 'while', 'def', 'try:', 'with', 'class'), True)
+    _END_WORDS   = dict.fromkeys(('#end', '#endfor', '#endif', '#endwhile', '#enddef', '#endtry', '#endwith', '#endclass'), True)
+    _CONT_WORDS  = dict.fromkeys(('elif', 'else:', 'except', 'except:', 'finally:'), True)
+    _WORD_REXP   = re.compile(r'\S+')
+
+    depth = -1
+
+    ##
+    ## ex.
+    ##   input = r"""
+    ##   if items:
+    ##   _buf.extend(('<ul>\n', ))
+    ##   i = 0
+    ##   for item in items:
+    ##   i += 1
+    ##   _buf.extend(('<li>', to_str(item), '</li>\n', ))
+    ##   #endfor
+    ##   _buf.extend(('</ul>\n', ))
+    ##   #endif
+    ##   """[1:]
+    ##   lines = input.splitlines(True)
+    ##   block = self.parse_lines(lines)
+    ##      #=>  [ "if items:\n",
+    ##             [ "_buf.extend(('<ul>\n', ))\n",
+    ##               "i = 0\n",
+    ##               "for item in items:\n",
+    ##               [ "i += 1\n",
+    ##                 "_buf.extend(('<li>', to_str(item), '</li>\n', ))\n",
+    ##               ],
+    ##               "#endfor\n",
+    ##               "_buf.extend(('</ul>\n', ))\n",
+    ##             ],
+    ##             "#endif\n",
+    ##           ]
+    def parse_lines(self, lines):
+        block = []
+        try:
+            self._parse_lines(lines.__iter__(), False, block, 0)
+        except StopIteration:
+            if self.depth > 0:
+                fname, linenum, colnum, linetext = self.filename, len(lines), None, None
+                raise TemplateSyntaxError("unexpected EOF.", (fname, linenum, colnum, linetext))
+        else:
+            pass
+        return block
+
+    def _parse_lines(self, lines_iter, end_block, block, linenum):
+        if block is None: block = []
+        _START_WORDS = self._START_WORDS
+        _END_WORDS   = self._END_WORDS
+        _CONT_WORDS  = self._CONT_WORDS
+        _WORD_REXP   = self._WORD_REXP
+        get_line = lines_iter.next
+        while True:
+            line = get_line()
+            linenum += line.count("\n")
+            m = _WORD_REXP.search(line)
+            if not m:
+                block.append(line)
+                continue
+            word = m.group(0)
+            if word in _END_WORDS:
+                if word != end_block and word != '#end':
+                    if end_block is False:
+                        msg = "'%s' found but corresponding statement is missing." % (word, )
+                    else:
+                        msg = "'%s' expected but got '%s'." % (end_block, word)
+                    colnum = m.start() + 1
+                    raise TemplateSyntaxError(msg, (self.filename, linenum, colnum, line))
+                return block, line, None, linenum
+            elif line.endswith(':\n') or line.endswith(':\r\n'):
+                if word in _CONT_WORDS:
+                    return block, line, word, linenum
+                elif word in _START_WORDS:
+                    block.append(line)
+                    self.depth += 1
+                    cont_word = None
+                    try:
+                        child_block, line, cont_word, linenum = \
+                            self._parse_lines(lines_iter, '#end'+word, [], linenum)
+                        block.extend((child_block, line, ))
+                        while cont_word:   # 'elif' or 'else:'
+                            child_block, line, cont_word, linenum = \
+                                self._parse_lines(lines_iter, '#end'+word, [], linenum)
+                            block.extend((child_block, line, ))
+                    except StopIteration:
+                        msg = "'%s' is not closed." % (cont_word or word)
+                        colnum = m.start() + 1
+                        raise TemplateSyntaxError(msg, (self.filename, linenum, colnum, line))
+                    self.depth -= 1
+                else:
+                    block.append(line)
+            else:
+                block.append(line)
+        assert "unreachable"
+
+    def _join_block(self, block, buf, depth):
+        indent = ' ' * (self.indent * depth)
+        for line in block:
+            if isinstance(line, list):
+                self._join_block(line, buf, depth+1)
+            elif line.isspace():
+                buf.append(line)
+            else:
+                buf.append(indent + line.lstrip())
+
+    def _arrange_indent(self, buf):
+        """arrange indentation of statements in buf"""
+        block = self.parse_lines(buf)
+        buf[:] = []
+        self._join_block(block, buf, 0)
+
+
+    def render(self, context=None, globals=None, _buf=None):
+        """Evaluate python code with context dictionary.
+           If _buf is None then return the result of evaluation as str,
+           else return None.
+
+           context:dict (=None)
+             Context object to evaluate. If None then new dict is created.
+           globals:dict (=None)
+             Global object. If None then globals() is used.
+           _buf:list (=None)
+             If None then new list is created.
+        """
+        if context is None:
+            locals = context = {}
+        elif self.args is None:
+            locals = context.copy()
+        else:
+            locals = {}
+            if '_engine' in context:
+                context.get('_engine').hook_context(locals)
+        locals['_context'] = context
+        if globals is None:
+            globals = sys._getframe(1).f_globals
+        bufarg = _buf
+        if _buf is None:
+            _buf = []
+        locals['_buf'] = _buf
+        if not self.bytecode:
+            self.compile()
+        if self.trace:
+            _buf.append("<!-- ***** begin: %s ***** -->\n" % self.filename)
+            exec(self.bytecode, globals, locals)
+            _buf.append("<!-- ***** end: %s ***** -->\n" % self.filename)
+        else:
+            exec(self.bytecode, globals, locals)
+        if bufarg is not None:
+            return bufarg
+        elif not logger:
+            return ''.join(_buf)
+        else:
+            try:
+                return ''.join(_buf)
+            except UnicodeDecodeError, ex:
+                logger.error("[tenjin.Template] " + str(ex))
+                logger.error("[tenjin.Template] (_buf=%r)" % (_buf, ))
+                raise
+
+    def compile(self):
+        """compile self.script into self.bytecode"""
+        self.bytecode = compile(self.script, self.filename or '(tenjin)', 'exec')
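+## A minimal end-to-end sketch of convert()/render() with a loop (file-less
+## template; context values are made up; assumes 'from tenjin.helpers import *'
+## in the calling module):
+## ex.
+##   t = Template(input="<ul>\n<?py for x in xs: ?>\n<li>#{x}</li>\n<?py #endfor ?>\n</ul>\n")
+##   t.render({'xs': [1, 2, 3]})
+##   #=> '<ul>\n<li>1</li>\n<li>2</li>\n<li>3</li>\n</ul>\n'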
+
+
+##
+## preprocessor class
+##
+
+class Preprocessor(Template):
+    """Template class for preprocessing."""
+
+    STMT_PATTERN = (r'<\?PY( |\t|\r?\n)(.*?) ?\?>([ \t]*\r?\n)?', re.S)
+
+    EXPR_PATTERN = (r'#\{\{(.*?)\}\}|\$\{\{(.*?)\}\}|\{#=(?:=(.*?)=|(.*?))=#\}', re.S)
+
+    def add_expr(self, buf, code, *flags):
+        if not code or code.isspace():
+            return
+        code = "_decode_params(%s)" % code
+        Template.add_expr(self, buf, code, *flags)
+
+
+class TemplatePreprocessor(object):
+    factory = Preprocessor
+
+    def __init__(self, factory=None):
+        if factory is not None: self.factory = factory
+        self.globals = sys._getframe(1).f_globals
+
+    def __call__(self, input, **kwargs):
+        filename = kwargs.get('filename')
+        context  = kwargs.get('context') or {}
+        globals  = kwargs.get('globals') or self.globals
+        template = self.factory()
+        template.convert(input, filename)
+        return template.render(context, globals=globals)
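+## Preprocessing runs once when a template is loaded: '<?PY ... ?>' statements
+## and '#{{...}}'/'${{...}}' expressions are evaluated first, and the expanded
+## text is then compiled as a normal template.  A hypothetical template
+## fragment (preprocessing is enabled with Engine(preprocess=True)):
+## ex.
+##   <?PY states = {'CA': 'California', 'NY': 'New York'} ?>
+##   <select>
+##   <?PY for code in states: ?>
+##     <option value="#{{code}}">${{states[code]}}</option>
+##   <?PY #endfor ?>
+##   </select>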
+
+
+class TrimPreprocessor(object):
+
+    _rexp     = re.compile(r'^[ \t]+<', re.M)
+    _rexp_all = re.compile(r'^[ \t]+',  re.M)
+
+    def __init__(self, all=False):
+        self.all = all
+
+    def __call__(self, input, **kwargs):
+        if self.all:
+            return self._rexp_all.sub('', input)
+        else:
+            return self._rexp.sub('<', input)
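+## TrimPreprocessor removes the indentation in front of lines starting with
+## '<' (or all indentation when all=True), shrinking the rendered output.
+## Sketch:
+## ex.
+##   pp = TrimPreprocessor()
+##   pp("  <div>\n    <p>hi</p>\n  </div>\n")
+##   #=> '<div>\n<p>hi</p>\n</div>\n'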
+
+
+class PrefixedLinePreprocessor(object):
+
+    def __init__(self, prefix='::(?=[ \t]|$)'):
+        self.prefix = prefix
+        self.regexp = re.compile(r'^([ \t]*)' + prefix + r'(.*)', re.M)
+
+    def convert_prefixed_lines(self, text):
+        fn = lambda m: "%s<?py%s ?>" % (m.group(1), m.group(2))
+        return self.regexp.sub(fn, text)
+
+    STMT_REXP = re.compile(r'<\?py\s.*?\?>', re.S)
+
+    def __call__(self, input, **kwargs):
+        buf = []; append = buf.append
+        pos = 0
+        for m in self.STMT_REXP.finditer(input):
+            text = input[pos:m.start()]
+            stmt = m.group(0)
+            pos = m.end()
+            if text: append(self.convert_prefixed_lines(text))
+            append(stmt)
+        rest = input[pos:]
+        if rest: append(self.convert_prefixed_lines(rest))
+        return "".join(buf)
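+## PrefixedLinePreprocessor rewrites lines whose first token is '::' into
+## '<?py ... ?>' statements, giving templates a lighter statement syntax.
+## Sketch:
+## ex.
+##   pp = PrefixedLinePreprocessor()
+##   pp(":: for item in items:\n<li>#{item}</li>\n:: #endfor\n")
+##   #=> '<?py for item in items: ?>\n<li>#{item}</li>\n<?py #endfor ?>\n'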
+
+
+class ParseError(Exception):
+    pass
+
+
+class JavaScriptPreprocessor(object):
+
+    def __init__(self, **attrs):
+        self._attrs = attrs
+
+    def __call__(self, input, **kwargs):
+        return self.parse(input, kwargs.get('filename'))
+
+    def parse(self, input, filename=None):
+        buf = []
+        self._parse_chunks(input, buf, filename)
+        return ''.join(buf)
+
+    CHUNK_REXP = re.compile(r'(?:^( *)<|<)!-- *#(?:JS: (\$?\w+(?:\.\w+)*\(.*?\))|/JS:?) *-->([ \t]*\r?\n)?', re.M)
+
+    def _scan_chunks(self, input, filename):
+        rexp = self.CHUNK_REXP
+        pos = 0
+        curr_funcdecl = None
+        for m in rexp.finditer(input):
+            lspace, funcdecl, rspace = m.groups()
+            text = input[pos:m.start()]
+            pos = m.end()
+            if funcdecl:
+                if curr_funcdecl:
+                    raise ParseError("%s is nested in %s. (file: %s, line: %s)" % \
+                                         (funcdecl, curr_funcdecl, filename, _linenum(input, m.start()), ))
+                curr_funcdecl = funcdecl
+            else:
+                if not curr_funcdecl:
+                    raise ParseError("unexpected '<!-- #/JS -->'. (file: %s, line: %s)" % \
+                                         (filename, _linenum(input, m.start()), ))
+                curr_funcdecl = None
+            yield text, lspace, funcdecl, rspace, False
+        if curr_funcdecl:
+            raise ParseError("%s is not closed by '<!-- #/JS -->'. (file: %s, line: %s)" % \
+                                 (curr_funcdecl, filename, _linenum(input, m.start()), ))
+        rest = input[pos:]
+        yield rest, None, None, None, True
+
+    def _parse_chunks(self, input, buf, filename=None):
+        if not input: return
+        stag = '<script'
+        if self._attrs:
+            for k in self._attrs:
+                stag = "".join((stag, ' ', k, '="', self._attrs[k], '"'))
+        stag += '>'
+        etag = '</script>'
+        for text, lspace, funcdecl, rspace, end_p in self._scan_chunks(input, filename):
+            if end_p: break
+            if funcdecl:
+                buf.append(text)
+                if re.match(r'^\$?\w+\(', funcdecl):
+                    buf.extend((lspace or '', stag, 'function ', funcdecl, "{var _buf='';", rspace or ''))
+                else:
+                    m = re.match(r'(.+?)\((.*)\)', funcdecl)
+                    buf.extend((lspace or '', stag, m.group(1), '=function(', m.group(2), "){var _buf='';", rspace or ''))
+            else:
+                self._parse_stmts(text, buf)
+                buf.extend((lspace or '', "return _buf;};", etag, rspace or ''))
+            #
+        buf.append(text)
+
+    STMT_REXP = re.compile(r'(?:^( *)<|<)\?js(\s.*?) ?\?>([ \t]*\r?\n)?', re.M | re.S)
+
+    def _scan_stmts(self, input):
+        rexp = self.STMT_REXP
+        pos = 0
+        for m in rexp.finditer(input):
+            lspace, code, rspace = m.groups()
+            text = input[pos:m.start()]
+            pos = m.end()
+            yield text, lspace, code, rspace, False
+        rest = input[pos:]
+        yield rest, None, None, None, True
+
+    def _parse_stmts(self, input, buf):
+        if not input: return
+        for text, lspace, code, rspace, end_p in self._scan_stmts(input):
+            if end_p: break
+            if lspace is not None and rspace is not None:
+                self._parse_exprs(text, buf)
+                buf.extend((lspace, code, rspace))
+            else:
+                if lspace:
+                    text += lspace
+                self._parse_exprs(text, buf)
+                buf.append(code)
+                if rspace:
+                    self._parse_exprs(rspace, buf)
+        if text:
+            self._parse_exprs(text, buf)
+
+    s = r'(?:\{[^{}]*?\}[^{}]*?)*'
+    EXPR_REXP = re.compile(r'\{=(.*?)=\}|([$#])\{(.*?' + s + r')\}', re.S)
+    del s
+
+    def _get_expr(self, m):
+        code1, ch, code2 = m.groups()
+        if ch:
+            code = code2
+            escape_p = ch == '$'
+        elif code1[0] == code1[-1] == '=':
+            code = code1[1:-1]
+            escape_p = False
+        else:
+            code = code1
+            escape_p = True
+        return code, escape_p
+
+    def _scan_exprs(self, input):
+        rexp = self.EXPR_REXP
+        pos = 0
+        for m in rexp.finditer(input):
+            text = input[pos:m.start()]
+            pos = m.end()
+            code, escape_p = self._get_expr(m)
+            yield text, code, escape_p, False
+        rest = input[pos:]
+        yield rest, None, None, True
+
+    def _parse_exprs(self, input, buf):
+        if not input: return
+        buf.append("_buf+=")
+        extend = buf.extend
+        op = ''
+        for text, code, escape_p, end_p in self._scan_exprs(input):
+            if end_p:
+                break
+            if text:
+                extend((op, self._escape_text(text)))
+                op = '+'
+            if code:
+                extend((op, escape_p and '_E(' or '_S(', code, ')'))
+                op = '+'
+        rest = text
+        if rest:
+            extend((op, self._escape_text(rest)))
+        if input.endswith("\n"):
+            buf.append(";\n")
+        else:
+            buf.append(";")
+
+    def _escape_text(self, text):
+        lines = text.splitlines(True)
+        fn = self._escape_str
+        s = "\\\n".join( fn(line) for line in lines )
+        return "".join(("'", s, "'"))
+
+    def _escape_str(self, string):
+        return string.replace("\\", "\\\\").replace("'", "\\'").replace("\n", r"\n")
+
+
+def _linenum(input, pos):
+    return input[0:pos].count("\n") + 1
+
+
+JS_FUNC = r"""
+function _S(x){return x==null?'':x;}
+function _E(x){return x==null?'':typeof(x)!=='string'?x:x.replace(/[&<>"']/g,_EF);}
+var _ET={'&':"&amp;",'<':"&lt;",'>':"&gt;",'"':"&quot;","'":"&#039;"};
+function _EF(c){return _ET[c];};
+"""[1:-1]
+JS_FUNC = escaped.EscapedStr(JS_FUNC)
+
+
+
+##
+## cache storages
+##
+
+class CacheStorage(object):
+    """[abstract] Template object cache class (in memory and/or file)"""
+
+    def __init__(self):
+        self.items = {}    # key: full path, value: template object
+
+    def get(self, cachepath, create_template):
+        """Get template object. If not found, load attributes from the cache file and restore the template object."""
+        template = self.items.get(cachepath)
+        if not template:
+            dct = self._load(cachepath)
+            if dct:
+                template = create_template()
+                for k in dct:
+                    setattr(template, k, dct[k])
+                self.items[cachepath] = template
+        return template
+
+    def set(self, cachepath, template):
+        """set template object and save template attributes into cache file."""
+        self.items[cachepath] = template
+        dct = self._save_data_of(template)
+        return self._store(cachepath, dct)
+
+    def _save_data_of(self, template):
+        return { 'args'  : template.args,   'bytecode' : template.bytecode,
+                 'script': template.script, 'timestamp': template.timestamp }
+
+    def unset(self, cachepath):
+        """remove template object from dict and cache file."""
+        self.items.pop(cachepath, None)
+        return self._delete(cachepath)
+
+    def clear(self):
+        """remove all template objects and attributes from dict and cache file."""
+        d, self.items = self.items, {}
+        for k in d.iterkeys():
+            self._delete(k)
+        d.clear()
+
+    def _load(self, cachepath):
+        """(abstract) load dict object which represents template object attributes from cache file."""
+        raise NotImplementedError.new("%s#_load(): not implemented yet." % self.__class__.__name__)
+
+    def _store(self, cachepath, template):
+        """(abstract) store dict object which represents template object attributes into cache file."""
+        raise NotImplementedError.new("%s#_store(): not implemented yet." % self.__class__.__name__)
+
+    def _delete(self, cachepath):
+        """(abstract) remove template object from cache file."""
+        raise NotImplementedError.new("%s#_delete(): not implemented yet." % self.__class__.__name__)
+
+
+class MemoryCacheStorage(CacheStorage):
+
+    def _load(self, cachepath):
+        return None
+
+    def _store(self, cachepath, template):
+        pass
+
+    def _delete(self, cachepath):
+        pass
+
+
+class FileCacheStorage(CacheStorage):
+
+    def _load(self, cachepath):
+        if not _isfile(cachepath): return None
+        if logger: logger.info("[tenjin.%s] load cache (file=%r)" % (self.__class__.__name__, cachepath))
+        data = _read_binary_file(cachepath)
+        return self._restore(data)
+
+    def _store(self, cachepath, dct):
+        if logger: logger.info("[tenjin.%s] store cache (file=%r)" % (self.__class__.__name__, cachepath))
+        data = self._dump(dct)
+        _write_binary_file(cachepath, data)
+
+    def _restore(self, data):
+        raise NotImplementedError("%s._restore(): not implemented yet." % self.__class__.__name__)
+
+    def _dump(self, dct):
+        raise NotImplementedError("%s._dump(): not implemented yet." % self.__class__.__name__)
+
+    def _delete(self, cachepath):
+        _ignore_not_found_error(lambda: os.unlink(cachepath))
+
+
+class MarshalCacheStorage(FileCacheStorage):
+
+    def _restore(self, data):
+        return marshal.loads(data)
+
+    def _dump(self, dct):
+        return marshal.dumps(dct)
+
+
+class PickleCacheStorage(FileCacheStorage):
+
+    def __init__(self, *args, **kwargs):
+        global pickle
+        if pickle is None:
+            import cPickle as pickle
+        FileCacheStorage.__init__(self, *args, **kwargs)
+
+    def _restore(self, data):
+        return pickle.loads(data)
+
+    def _dump(self, dct):
+        dct.pop('bytecode', None)
+        return pickle.dumps(dct)
+
+
+class TextCacheStorage(FileCacheStorage):
+
+    def _restore(self, data):
+        header, script = data.split("\n\n", 1)
+        timestamp = encoding = args = None
+        for line in header.split("\n"):
+            key, val = line.split(": ", 1)
+            if   key == 'timestamp':  timestamp = float(val)
+            elif key == 'encoding':   encoding  = val
+            elif key == 'args':       args      = val.split(', ')
+        if encoding: script = script.decode(encoding)   ## binary(=str) to unicode
+        return {'args': args, 'script': script, 'timestamp': timestamp}
+
+    def _dump(self, dct):
+        s = dct['script']
+        if dct.get('encoding') and isinstance(s, unicode):
+            s = s.encode(dct['encoding'])           ## unicode to binary(=str)
+        sb = []
+        sb.append("timestamp: %s\n" % dct['timestamp'])
+        if dct.get('encoding'):
+            sb.append("encoding: %s\n" % dct['encoding'])
+        if dct.get('args') is not None:
+            sb.append("args: %s\n" % ', '.join(dct['args']))
+        sb.append("\n")
+        sb.append(s)
+        s = ''.join(sb)
+        if python3:
+            if isinstance(s, str):
+                s = s.encode(dct.get('encoding') or 'utf-8')   ## unicode(=str) to binary
+        return s
+
+    def _save_data_of(self, template):
+        dct = FileCacheStorage._save_data_of(self, template)
+        dct['encoding'] = template.encoding
+        return dct
+
+
+
+##
+## abstract class for data cache
+##
+class KeyValueStore(object):
+
+    def get(self, key, *options):
+        raise NotImplementedError("%s.get(): not implemented yet." % self.__class__.__name__)
+
+    def set(self, key, value, *options):
+        raise NotImplementedError("%s.set(): not implemented yet." % self.__class__.__name__)
+
+    def delete(self, key, *options):
+        raise NotImplementedError("%s.delete(): not implemented yet." % self.__class__.__name__)
+
+    def has(self, key, *options):
+        raise NotImplementedError("%s.has(): not implemented yet." % self.__class__.__name__)
+
+
+##
+## memory base data cache
+##
+class MemoryBaseStore(KeyValueStore):
+
+    def __init__(self):
+        self.values = {}
+
+    def get(self, key, original_timestamp=None):
+        tupl = self.values.get(key)
+        if not tupl:
+            return None
+        value, created_at, expires_at = tupl
+        if original_timestamp is not None and created_at < original_timestamp:
+            self.delete(key)
+            return None
+        if expires_at < _time():
+            self.delete(key)
+            return None
+        return value
+
+    def set(self, key, value, lifetime=0):
+        created_at = _time()
+        expires_at = lifetime and created_at + lifetime or 0
+        self.values[key] = (value, created_at, expires_at)
+        return True
+
+    def delete(self, key):
+        try:
+            del self.values[key]
+            return True
+        except KeyError:
+            return False
+
+    def has(self, key):
+        pair = self.values.get(key)
+        if not pair:
+            return False
+        value, created_at, expires_at = pair
+        if expires_at and expires_at < _time():
+            self.delete(key)
+            return False
+        return True
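+## Sketch of the KeyValueStore interface with the in-memory backend (key and
+## value are made up):
+## ex.
+##   store = MemoryBaseStore()
+##   store.set('sidebar', '<p>cached html</p>', 60)   # keep for 60 seconds
+##   store.get('sidebar')      #=> '<p>cached html</p>'
+##   store.has('sidebar')      #=> True
+##   store.delete('sidebar')   #=> True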
+
+
+##
+## file base data cache
+##
+class FileBaseStore(KeyValueStore):
+
+    lifetime = 604800   # = 60*60*24*7
+
+    def __init__(self, root_path, encoding=None):
+        if not os.path.isdir(root_path):
+            raise ValueError("%r: directory not found." % (root_path, ))
+        self.root_path = root_path
+        if encoding is None and python3:
+            encoding = 'utf-8'
+        self.encoding = encoding
+
+    _pat = re.compile(r'[^-.\/\w]')
+
+    def filepath(self, key, _pat1=_pat):
+        return os.path.join(self.root_path, _pat1.sub('_', key))
+
+    def get(self, key, original_timestamp=None):
+        fpath = self.filepath(key)
+        #if not _isfile(fpath): return None
+        stat = _ignore_not_found_error(lambda: os.stat(fpath), None)
+        if stat is None:
+            return None
+        created_at = stat.st_ctime
+        expires_at = stat.st_mtime
+        if original_timestamp is not None and created_at < original_timestamp:
+            self.delete(key)
+            return None
+        if expires_at < _time():
+            self.delete(key)
+            return None
+        if self.encoding:
+            f = lambda: _read_text_file(fpath, self.encoding)
+        else:
+            f = lambda: _read_binary_file(fpath)
+        return _ignore_not_found_error(f, None)
+
+    def set(self, key, value, lifetime=0):
+        fpath = self.filepath(key)
+        dirname = os.path.dirname(fpath)
+        if not os.path.isdir(dirname):
+            os.makedirs(dirname)
+        now = _time()
+        if isinstance(value, _unicode):
+            value = value.encode(self.encoding or 'utf-8')
+        _write_binary_file(fpath, value)
+        expires_at = now + (lifetime or self.lifetime)  # timestamp
+        os.utime(fpath, (expires_at, expires_at))
+        return True
+
+    def delete(self, key):
+        fpath = self.filepath(key)
+        ret = _ignore_not_found_error(lambda: os.unlink(fpath), False)
+        return ret != False
+
+    def has(self, key):
+        fpath = self.filepath(key)
+        if not _isfile(fpath):
+            return False
+        if _getmtime(fpath) < _time():
+            self.delete(key)
+            return False
+        return True
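+## FileBaseStore keeps each value in its own file under root_path and stores
+## the expiry time in the file's mtime.  Sketch (the cache directory is
+## hypothetical and must already exist):
+## ex.
+##   store = FileBaseStore('/tmp/my_cache_dir')
+##   store.set('page.index', '<p>cached</p>', 3600)
+##   store.get('page.index')   #=> '<p>cached</p>'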
+
+
+
+##
+## html fragment cache helper class
+##
+class FragmentCacheHelper(object):
+    """html fragment cache helper class."""
+
+    lifetime = 60   # 1 minute
+    prefix   = None
+
+    def __init__(self, store, lifetime=None, prefix=None):
+        self.store = store
+        if lifetime is not None:  self.lifetime = lifetime
+        if prefix   is not None:  self.prefix   = prefix
+
+    def not_cached(self, cache_key, lifetime=None):
+        """(obsolete. use cache_as() instead of this.)
+           html fragment cache helper. see document of FragmentCacheHelper class."""
+        context = sys._getframe(1).f_locals['_context']
+        context['_cache_key'] = cache_key
+        key = self.prefix and self.prefix + cache_key or cache_key
+        value = self.store.get(key)
+        if value:    ## cached
+            if logger: logger.debug('[tenjin.not_cached] %r: cached.' % (cache_key, ))
+            context[key] = value
+            return False
+        else:        ## not cached
+            if logger: logger.debug('[tenjin.not_cached]: %r: not cached.' % (cache_key, ))
+            if key in context: del context[key]
+            if lifetime is None:  lifetime = self.lifetime
+            context['_cache_lifetime'] = lifetime
+            helpers.start_capture(cache_key, _depth=2)
+            return True
+
+    def echo_cached(self):
+        """(obsolete. use cache_as() instead of this.)
+           html fragment cache helper. see document of FragmentCacheHelper class."""
+        f_locals = sys._getframe(1).f_locals
+        context = f_locals['_context']
+        cache_key = context.pop('_cache_key')
+        key = self.prefix and self.prefix + cache_key or cache_key
+        if key in context:    ## cached
+            value = context.pop(key)
+        else:                 ## not cached
+            value = helpers.stop_capture(False, _depth=2)
+            lifetime = context.pop('_cache_lifetime')
+            self.store.set(key, value, lifetime)
+        f_locals['_buf'].append(value)
+
+    def functions(self):
+        """(obsolete. use cache_as() instead of this.)"""
+        return (self.not_cached, self.echo_cached)
+
+    def cache_as(self, cache_key, lifetime=None):
+        key = self.prefix and self.prefix + cache_key or cache_key
+        _buf = sys._getframe(1).f_locals['_buf']
+        value = self.store.get(key)
+        if value:
+            if logger: logger.debug('[tenjin.cache_as] %r: cache found.' % (cache_key, ))
+            _buf.append(value)
+        else:
+            if logger: logger.debug('[tenjin.cache_as] %r: expired or not cached yet.' % (cache_key, ))
+            _buf_len = len(_buf)
+            yield None
+            value = ''.join(_buf[_buf_len:])
+            self.store.set(key, value, lifetime)
+
+## you can change default store by 'tenjin.helpers.fragment_cache.store = ...'
+helpers.fragment_cache = FragmentCacheHelper(MemoryBaseStore())
+helpers.not_cached  = helpers.fragment_cache.not_cached
+helpers.echo_cached = helpers.fragment_cache.echo_cached
+helpers.cache_as    = helpers.fragment_cache.cache_as
+helpers.__all__.extend(('not_cached', 'echo_cached', 'cache_as'))
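+## Sketch of fragment caching inside a template via cache_as(); the cache key
+## and markup are hypothetical, and the default store is the MemoryBaseStore
+## configured above:
+## ex.
+##   <?py for _ in cache_as('sidebar/%s' % user_id, 60): ?>
+##   <div class="sidebar">
+##     ... expensive-to-render markup ...
+##   </div>
+##   <?py #endfor ?>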
+
+
+
+##
+## helper class to find and read template
+##
+class Loader(object):
+
+    def exists(self, filepath):
+        raise NotImplementedError("%s.exists(): not implemented yet." % self.__class__.__name__)
+
+    def find(self, filename, dirs=None):
+        #: if dirs provided then search for the template file in them.
+        if dirs:
+            for dirname in dirs:
+                filepath = os.path.join(dirname, filename)
+                if self.exists(filepath):
+                    return filepath
+        #: if dirs not provided then just return filename if file exists.
+        else:
+            if self.exists(filename):
+                return filename
+        #: if file not found then return None.
+        return None
+
+    def abspath(self, filename):
+        raise NotImplementedError("%s.abspath(): not implemented yet." % self.__class__.__name__)
+
+    def timestamp(self, filepath):
+        raise NotImplementedError("%s.timestamp(): not implemented yet." % self.__class__.__name__)
+
+    def load(self, filepath):
+        raise NotImplementedError("%s.load(): not implemented yet." % self.__class__.__name__)
+
+
+
+##
+## helper class to find and read files
+##
+class FileSystemLoader(Loader):
+
+    def exists(self, filepath):
+        #: return True if filepath exists as a file.
+        return os.path.isfile(filepath)
+
+    def abspath(self, filepath):
+        #: return full-path of filepath
+        return os.path.abspath(filepath)
+
+    def timestamp(self, filepath):
+        #: return mtime of file
+        return _getmtime(filepath)
+
+    def load(self, filepath):
+        #: if file exists, return file content and mtime
+        def f():
+            mtime = _getmtime(filepath)
+            input = _read_template_file(filepath)
+            mtime2 = _getmtime(filepath)
+            if mtime != mtime2:
+                mtime = mtime2
+                input = _read_template_file(filepath)
+                mtime2 = _getmtime(filepath)
+                if mtime != mtime2:
+                    if logger:
+                        logger.warn("[tenjin] %s.load(): timestamp is changed while reading file." % self.__class__.__name__)
+            return input, mtime
+        #: if file not exist, return None
+        return _ignore_not_found_error(f)
+
+
+##
+##
+##
+class TemplateNotFoundError(Exception):
+    pass
+
+
+
+##
+## template engine class
+##
+
+class Engine(object):
+    """Template Engine class.
+       See User's Guide and examples for details.
+       http://www.kuwata-lab.com/tenjin/pytenjin-users-guide.html
+       http://www.kuwata-lab.com/tenjin/pytenjin-examples.html
+    """
+
+    ## default value of attributes
+    prefix     = ''
+    postfix    = ''
+    layout     = None
+    templateclass = Template
+    path       = None
+    cache      = TextCacheStorage()  # save converted Python code into text file
+    lang       = None
+    loader     = FileSystemLoader()
+    preprocess = False
+    preprocessorclass = Preprocessor
+    timestamp_interval = 1  # seconds
+
+    def __init__(self, prefix=None, postfix=None, layout=None, path=None, cache=True, preprocess=None, templateclass=None, preprocessorclass=None, lang=None, loader=None, pp=None, **kwargs):
+        """Initializer of Engine class.
+
+           prefix:str (='')
+             Prefix string used to convert template short name to template filename.
+           postfix:str (='')
+             Postfix string used to convert template short name to template filename.
+           layout:str (=None)
+             Default layout template name.
+           path:list of str(=None)
+             List of directory names which contain template files.
+           cache:bool or CacheStorage instance (=True)
+             Cache storage object to store converted python code.
+             If True, default cache storage (=Engine.cache) is used (if it is None
+             then create MarshalCacheStorage object for each engine object).
+             If False, no cache storage is used and no cache files are created.
+           preprocess:bool(=False)
+             Activate preprocessing or not.
+           templateclass:class (=Template)
+             Template class which engine creates automatically.
+           lang:str (=None)
+             Language name such as 'en', 'fr', 'ja', and so on. If you specify
+             this, cache file path will be 'index.html.en.cache' for example.
+           pp:list (=None)
+             List of preprocessor objects; each must be callable and manipulates template content.
+           kwargs:dict
+             Options for Template class constructor.
+             See document of Template.__init__() for details.
+        """
+        if prefix:  self.prefix  = prefix
+        if postfix: self.postfix = postfix
+        if layout:  self.layout  = layout
+        if templateclass: self.templateclass = templateclass
+        if preprocessorclass: self.preprocessorclass = preprocessorclass
+        if path is not None:  self.path = path
+        if lang is not None:  self.lang = lang
+        if loader is not None: self.loader = loader
+        if preprocess is not None: self.preprocess = preprocess
+        if   pp is None:            pp = []
+        elif isinstance(pp, list):  pass
+        elif isinstance(pp, tuple): pp = list(pp)
+        else:
+            raise TypeError("'pp' expected to be a list but got %r." % (pp,))
+        self.pp = pp
+        if preprocess:
+            self.pp.append(TemplatePreprocessor(self.preprocessorclass))
+        self.kwargs = kwargs
+        self.encoding = kwargs.get('encoding')
+        self._filepaths = {}   # template_name => relative path and absolute path
+        self._added_templates = {}   # templates added by add_template()
+        #self.cache = cache
+        self._set_cache_storage(cache)
+
+    def _set_cache_storage(self, cache):
+        if cache is True:
+            if not self.cache:
+                self.cache = MarshalCacheStorage()
+        elif cache is None:
+            pass
+        elif cache is False:
+            self.cache = None
+        elif isinstance(cache, CacheStorage):
+            self.cache = cache
+        else:
+            raise ValueError("%r: invalid cache object." % (cache, ))
+
+    def cachename(self, filepath):
+        #: if lang is provided then add it to cache filename.
+        if self.lang:
+            return '%s.%s.cache' % (filepath, self.lang)
+        #: return cache file name.
+        else:
+            return filepath + '.cache'
+
+    def to_filename(self, template_name):
+        """Convert template short name into filename.
+           ex.
+             >>> engine = tenjin.Engine(prefix='user_', postfix='.pyhtml')
+             >>> engine.to_filename(':list')
+             'user_list.pyhtml'
+             >>> engine.to_filename('list')
+             'list'
+        """
+        #: if template_name starts with ':', add prefix and postfix to it.
+        if template_name[0] == ':' :
+            return self.prefix + template_name[1:] + self.postfix
+        #: if template_name doesn't start with ':', just return it.
+        return template_name
+
+    def _create_template(self, input=None, filepath=None, _context=None, _globals=None):
+        #: if input is not specified then just create empty template object.
+        template = self.templateclass(None, **self.kwargs)
+        #: if input is specified then create template object and return it.
+        if input:
+            template.convert(input, filepath)
+        return template
+
+    def _preprocess(self, input, filepath, _context, _globals):
+        #if _context is None: _context = {}
+        #if _globals is None: _globals = sys._getframe(3).f_globals
+        #: preprocess template and return result
+        #preprocessor = self.preprocessorclass(filepath, input=input)
+        #return preprocessor.render(_context, globals=_globals)
+        #: preprocesses input with _context and returns result.
+        if '_engine' not in _context:
+            self.hook_context(_context)
+        for pp in self.pp:
+            input = pp.__call__(input, filename=filepath, context=_context, globals=_globals)
+        return input
+
+    def add_template(self, template):
+        self._added_templates[template.filename] = template
+
+    def _get_template_from_cache(self, cachepath, filepath):
+        #: if template not found in cache, return None
+        template = self.cache.get(cachepath, self.templateclass)
+        if not template:
+            return None
+        assert template.timestamp is not None
+        #: if checked within a sec, skip timestamp check.
+        now = _time()
+        last_checked = getattr(template, '_last_checked_at', None)
+        if last_checked and now < last_checked + self.timestamp_interval:
+            #if logger: logger.trace('[tenjin.%s] timestamp check skipped (%f < %f + %f)' % \
+            #                        (self.__class__.__name__, now, template._last_checked_at, self.timestamp_interval))
+            return template
+        #: if timestamp of template object is same as file, return it.
+        if template.timestamp == self.loader.timestamp(filepath):
+            template._last_checked_at = now
+            return template
+        #: if timestamp of template object is different from file, clear it
+        #cache._delete(cachepath)
+        if logger: logger.info("[tenjin.%s] cache expired (filepath=%r)" % \
+                                   (self.__class__.__name__, filepath))
+        return None
+
+    def get_template(self, template_name, _context=None, _globals=None):
+        """Return template object.
+           If the template object has not been registered, the template engine
+           creates and registers it automatically.
+        """
+        #: accept template_name such as ':index'.
+        filename = self.to_filename(template_name)
+        #: if template object is added by add_template(), return it.
+        if filename in self._added_templates:
+            return self._added_templates[filename]
+        #: get filepath and fullpath of template
+        pair = self._filepaths.get(filename)
+        if pair:
+            filepath, fullpath = pair
+        else:
+            #: if template file is not found then raise TemplateNotFoundError.
+            filepath = self.loader.find(filename, self.path)
+            if not filepath:
+                raise TemplateNotFoundError('%s: filename not found (path=%r).' % (filename, self.path))
+            #
+            fullpath = self.loader.abspath(filepath)
+            self._filepaths[filename] = (filepath, fullpath)
+        #: use full path as base of cache file path
+        cachepath = self.cachename(fullpath)
+        #: get template object from cache
+        cache = self.cache
+        template = cache and self._get_template_from_cache(cachepath, filepath) or None
+        #: if template object is not found in cache or is expired...
+        if not template:
+            ret = self.loader.load(filepath)
+            if not ret:
+                raise TemplateNotFoundError("%r: template not found." % filepath)
+            input, timestamp = ret
+            if self.pp:   ## required for preprocessing
+                if _context is None: _context = {}
+                if _globals is None: _globals = sys._getframe(1).f_globals
+                input = self._preprocess(input, filepath, _context, _globals)
+            #: create template object.
+            template = self._create_template(input, filepath, _context, _globals)
+            #: set timestamp and filename of template object.
+            template.timestamp = timestamp
+            template._last_checked_at = _time()
+            #: save template object into cache.
+            if cache:
+                if not template.bytecode:
+                    #: ignores syntax error when compiling.
+                    try: template.compile()
+                    except SyntaxError: pass
+                cache.set(cachepath, template)
+        #else:
+        #    template.compile()
+        #:
+        template.filename = filepath
+        return template
+
+    def include(self, template_name, append_to_buf=True, **kwargs):
+        """Evaluate template using current local variables as context.
+
+           template_name:str
+             Filename (ex. 'user_list.pyhtml') or short name (ex. ':list') of template.
+           append_to_buf:boolean (=True)
+             If True then append output into _buf and return None,
+             else return string output.
+
+           ex.
+             <?py include('file.pyhtml') ?>
+             #{include('file.pyhtml', False)}
+             <?py val = include('file.pyhtml', False) ?>
+        """
+        #: get local and global vars of caller.
+        frame = sys._getframe(1)
+        locals  = frame.f_locals
+        globals = frame.f_globals
+        #: get _context from caller's local vars.
+        assert '_context' in locals
+        context = locals['_context']
+        #: if kwargs specified then add them into context.
+        if kwargs:
+            context.update(kwargs)
+        #: get template object with context data and global vars.
+        ## (context and globals are passed to get_template() only for preprocessing.)
+        template = self.get_template(template_name, context, globals)
+        #: if append_to_buf is true then add output to _buf.
+        #: if append_to_buf is false then don't add output to _buf.
+        if append_to_buf:  _buf = locals['_buf']
+        else:              _buf = None
+        #: render template and return output.
+        s = template.render(context, globals, _buf=_buf)
+        #: kwargs are removed from context data.
+        if kwargs:
+            for k in kwargs:
+                del context[k]
+        return s
+
+    def render(self, template_name, context=None, globals=None, layout=True):
+        """Evaluate template with layout file and return result of evaluation.
+
+           template_name:str
+             Filename (ex. 'user_list.pyhtml') or short name (ex. ':list') of template.
+           context:dict (=None)
+             Context object to evaluate. If None then new dict is used.
+           globals:dict (=None)
+             Global context to evaluate. If None then globals() is used.
+           layout:str or Bool(=True)
+             If True, the default layout name specified in constructor is used.
+             If False, no layout template is used.
+             If str, it is regarded as layout template name.
+
+           If the template object related to the 'template_name' argument does not
+           exist, the engine generates a template object and registers it automatically.
+        """
+        if context is None:
+            context = {}
+        if globals is None:
+            globals = sys._getframe(1).f_globals
+        self.hook_context(context)
+        while True:
+            ## context and globals are passed to get_template() only for preprocessing
+            template = self.get_template(template_name, context, globals)
+            content  = template.render(context, globals)
+            layout   = context.pop('_layout', layout)
+            if layout is True or layout is None:
+                layout = self.layout
+            if not layout:
+                break
+            template_name = layout
+            layout = False
+            context['_content'] = content
+        context.pop('_content', None)
+        return content
+
+    def hook_context(self, context):
+        #: add engine itself into context data.
+        context['_engine'] = self
+        #context['render'] = self.render
+        #: add include() method into context data.
+        context['include'] = self.include
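+## Putting the Engine pieces together; a hypothetical sketch (directory,
+## template names and context are made up; assumes 'from tenjin.helpers import *'
+## in the calling module):
+## ex.
+##   engine = Engine(path=['templates'], postfix='.pyhtml', layout=':layout')
+##   html = engine.render(':index', {'title': 'Example', 'items': [1, 2, 3]})
+##   ## ':index' expands to 'templates/index.pyhtml'; its output is handed to
+##   ## 'templates/layout.pyhtml' as the '_content' context variable.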
+
+
+##
+## safe template and engine
+##
+
+class SafeTemplate(Template):
+    """Uses 'to_escaped()' instead of 'escape()'.
+       '#{...}' is not allowed with this class. Use '{==...==}' instead.
+    """
+
+    tostrfunc  = 'to_str'
+    escapefunc = 'to_escaped'
+
+    def get_expr_and_flags(self, match):
+        return _get_expr_and_flags(match, "#{%s}: '#{}' is not allowed with SafeTemplate.")
+
+
+class SafePreprocessor(Preprocessor):
+
+    tostrfunc  = 'to_str'
+    escapefunc = 'to_escaped'
+
+    def get_expr_and_flags(self, match):
+        return _get_expr_and_flags(match, "#{{%s}}: '#{{}}' is not allowed with SafePreprocessor.")
+
+
+def _get_expr_and_flags(match, errmsg):
+    expr1, expr2, expr3, expr4 = match.groups()
+    if expr1 is not None:
+        raise TemplateSyntaxError(errmsg % match.group(1))
+    if expr2 is not None: return expr2, (True, False)   # #{...}    : call escape, not to_str
+    if expr3 is not None: return expr3, (False, True)   # [==...==] : not escape, call to_str
+    if expr4 is not None: return expr4, (True, False)   # [=...=]   : call escape, not to_str
+
+
+class SafeEngine(Engine):
+
+    templateclass     = SafeTemplate
+    preprocessorclass = SafePreprocessor
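+## With SafeTemplate/SafeEngine, '${...}' escapes via to_escaped() and raw
+## output must be requested explicitly; per EXPR_PATTERN above, '{==...==}'
+## emits a value unescaped.  Sketch (assumes to_str/to_escaped are available
+## in the caller's globals):
+## ex.
+##   t = SafeTemplate(input="<p>${text}</p><div>{==html==}</div>\n")
+##   t.render({'text': 'a<b', 'html': '<em>ok</em>'})
+##   #=> '<p>a&lt;b</p><div><em>ok</em></div>\n'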
+
+
+##
+## for Google App Engine
+## (should separate into individual file or module?)
+##
+
+def _dummy():
+    global memcache, _tenjin
+    memcache = _tenjin = None      # lazy import of google.appengine.api.memcache
+    global GaeMemcacheCacheStorage, GaeMemcacheStore, init
+
+    class GaeMemcacheCacheStorage(CacheStorage):
+
+        lifetime = 0     # 0 means unlimited
+
+        def __init__(self, lifetime=None, namespace=None):
+            CacheStorage.__init__(self)
+            if lifetime is not None:  self.lifetime = lifetime
+            self.namespace = namespace
+
+        def _load(self, cachepath):
+            key = cachepath
+            if _tenjin.logger: _tenjin.logger.info("[tenjin.gae.GaeMemcacheCacheStorage] load cache (key=%r)" % (key, ))
+            return memcache.get(key, namespace=self.namespace)
+
+        def _store(self, cachepath, dct):
+            dct.pop('bytecode', None)
+            key = cachepath
+            if _tenjin.logger: _tenjin.logger.info("[tenjin.gae.GaeMemcacheCacheStorage] store cache (key=%r)" % (key, ))
+            ret = memcache.set(key, dct, self.lifetime, namespace=self.namespace)
+            if not ret:
+                if _tenjin.logger: _tenjin.logger.info("[tenjin.gae.GaeMemcacheCacheStorage] failed to store cache (key=%r)" % (key, ))
+
+        def _delete(self, cachepath):
+            key = cachepath
+            memcache.delete(key, namespace=self.namespace)
+
+
+    class GaeMemcacheStore(KeyValueStore):
+
+        lifetime = 0
+
+        def __init__(self, lifetime=None, namespace=None):
+            if lifetime is not None:  self.lifetime = lifetime
+            self.namespace = namespace
+
+        def get(self, key):
+            return memcache.get(key, namespace=self.namespace)
+
+        def set(self, key, value, lifetime=None):
+            if lifetime is None:  lifetime = self.lifetime
+            if memcache.set(key, value, lifetime, namespace=self.namespace):
+                return True
+            else:
+                if _tenjin.logger: _tenjin.logger.info("[tenjin.gae.GaeMemcacheStore] failed to set (key=%r)" % (key, ))
+                return False
+
+        def delete(self, key):
+            return memcache.delete(key, namespace=self.namespace)
+
+        def has(self, key):
+            if memcache.add(key, 'dummy', namespace=self.namespace):
+                memcache.delete(key, namespace=self.namespace)
+                return False
+            else:
+                return True
+
+
+    def init():
+        global memcache, _tenjin
+        if not memcache:
+            from google.appengine.api import memcache
+        if not _tenjin: import tenjin as _tenjin
+        ## avoid cache conflicts between versions
+        ver = os.environ.get('CURRENT_VERSION_ID', '1.1')#.split('.')[0]
+        Engine.cache = GaeMemcacheCacheStorage(namespace=ver)
+        ## set fragment cache store
+        helpers.fragment_cache.store    = GaeMemcacheStore(namespace=ver)
+        helpers.fragment_cache.lifetime = 60    #  1 minute
+        helpers.fragment_cache.prefix   = 'fragment.'
+
+
+gae = create_module('tenjin.gae', _dummy,
+                    os=os, helpers=helpers, Engine=Engine,
+                    CacheStorage=CacheStorage, KeyValueStore=KeyValueStore)
+
+
+del _dummy
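+
+## Illustrative sketch only, not part of the vendored code: the SafeEngine
+## class and the lazily-created 'tenjin.gae' module above are expected to be
+## wired up by an application roughly like this (the template path, template
+## file name and context values are hypothetical):
+##
+##     import tenjin
+##     from tenjin.helpers import *
+##     tenjin.gae.init()          # installs GaeMemcacheCacheStorage/Store
+##     engine = tenjin.SafeEngine(path=['templates'])
+##     html = engine.render('index.pyhtml', {'title': 'hello'})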
diff --git a/utest/c_utils_test.py b/utest/c_utils_test.py
new file mode 100755
index 0000000..5b28e57
--- /dev/null
+++ b/utest/c_utils_test.py
@@ -0,0 +1,66 @@
+#!/usr/bin/python
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+#
+# Some test code for c_parse_utils.py
+#
+
+import sys
+sys.path.append("../loxi_front_end")
+sys.path.append("..")
+
+from c_parse_utils import *
+
+for filename in [
+    '../canonical/openflow.h-1.0',
+    '../canonical/openflow.h-1.1',
+    '../canonical/openflow.h-1.2']:
+
+    f = open(filename, 'r')
+    all_lines = f.readlines()
+    contents = " ".join(all_lines)
+
+    print "clean_up"
+    print clean_up_input(contents)
+
+#print "structs"
+#for x in extract_structs(contents):
+#    print x
+
+    all_enums = extract_enums(contents)
+    print "Got %d enums for %s" % (len(all_enums), filename)
+    for x in all_enums:
+        name, entries =  extract_enum_vals(x)
+        print "Enum name %s has %d entries" % (name, len(entries))
+        for item in entries:
+            print "  key=%s, value=%s" % (item[0], str(item[1]))
+
+    all_defs = extract_defines(contents)
+    print "Got %d defines for %s" % (len(all_defs), filename)
+    for x in all_defs:
+        print "  name=%s, value=%s" % (x[0], str(x[1]))
diff --git a/utest/identifiers_test.py b/utest/identifiers_test.py
new file mode 100755
index 0000000..e8ff2cd
--- /dev/null
+++ b/utest/identifiers_test.py
@@ -0,0 +1,78 @@
+#!/usr/bin/python
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+#
+# Some test code for identifiers.py
+#
+
+import sys
+sys.path.append("..")
+sys.path.append("../loxi_front_end")
+sys.path.append("../loxi_utils")
+
+from identifiers import *
+
+test_dict = {}
+group_dict = {}
+
+for ver, filename in [
+    (1, '../canonical/openflow.h-1.0'),
+    (2, '../canonical/openflow.h-1.1'),
+    (3, '../canonical/openflow.h-1.2'),
+    (4, '../canonical/openflow.h-1.3')]:
+
+    f = open(filename, 'r')
+    all_lines = f.readlines()
+    contents = " ".join(all_lines)
+
+    add_identifiers(test_dict, group_dict, ver, contents)
+
+version_list = [1,2,3,4]
+print "Merged %d entries from files" % len(test_dict)
+
+for ident, info in test_dict.items():
+    print """
+Name %s:
+  common %s
+  num vals %d
+  all agree %s
+  defined agree %s
+  ofp name %s
+  group %s""" % (ident, str(info["common_value"]), 
+                  len(info["values_by_version"]),
+                  all_versions_agree(test_dict, version_list, ident),
+                  defined_versions_agree(test_dict, version_list, ident),
+                  info["ofp_name"], info["ofp_group"])
+
+    for version, value in info["values_by_version"].items():
+        print "  version %d value %s" % (version, value)
+
+for ident, loxi_list in group_dict.items():
+    print "Group %s:" % ident
+    for loxi_name in loxi_list:
+        print "   %s" % loxi_name
diff --git a/utest/of_h_utils_test.py b/utest/of_h_utils_test.py
new file mode 100755
index 0000000..0a4a6f9
--- /dev/null
+++ b/utest/of_h_utils_test.py
@@ -0,0 +1,54 @@
+#!/usr/bin/python
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+#
+# Some test code for of_h_utils.py
+#
+
+import sys
+sys.path.append("..")
+sys.path.append("../loxi_front_end")
+sys.path.append("../loxi_utils")
+
+from of_h_utils import *
+
+for filename, version in [
+    ('../canonical/openflow.h-1.0', 1),
+    ('../canonical/openflow.h-1.1', 2),
+    ('../canonical/openflow.h-1.2', 3),
+    ('../canonical/openflow.h-1.3', 4)]:
+
+    f = open(filename, 'r')
+    all_lines = f.readlines()
+    contents = " ".join(all_lines)
+
+    enum_dict = get_enum_dict(version, contents)
+    print "Got %d LOXI entries %s" % (len(enum_dict), filename)
+    for name, entry in enum_dict.items():
+        print "Enum %s:\n  ofp_name: %s.\n  ofp_group: %s.\n  value: %s" % (
+            name, entry.ofp_name, entry.ofp_group, str(entry.value))
diff --git a/utest/test_parser.py b/utest/test_parser.py
new file mode 100755
index 0000000..75e0c5d
--- /dev/null
+++ b/utest/test_parser.py
@@ -0,0 +1,161 @@
+#!/usr/bin/env python
+# Copyright 2013, Big Switch Networks, Inc.
+#
+# LoxiGen is licensed under the Eclipse Public License, version 1.0 (EPL), with
+# the following special exception:
+#
+# LOXI Exception
+#
+# As a special exception to the terms of the EPL, you may distribute libraries
+# generated by LoxiGen (LoxiGen Libraries) under the terms of your choice, provided
+# that copyright and licensing notices generated by LoxiGen are not altered or removed
+# from the LoxiGen Libraries and the notice provided below is (i) included in
+# the LoxiGen Libraries, if distributed in source code form and (ii) included in any
+# documentation for the LoxiGen Libraries, if distributed in binary form.
+#
+# Notice: "Copyright 2013, Big Switch Networks, Inc. This library was generated by the LoxiGen Compiler."
+#
+# You may not use this file except in compliance with the EPL or LOXI Exception. You may obtain
+# a copy of the EPL at:
+#
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# EPL for the specific language governing permissions and limitations
+# under the EPL.
+
+import unittest
+import pyparsing
+import loxi_front_end.parser as parser
+
+class StructTests(unittest.TestCase):
+    def test_empty(self):
+        src = """\
+struct foo { };
+"""
+        ast = parser.parse(src)
+        self.assertEquals(ast.asList(), [['struct', 'foo', []]])
+
+    def test_one_field(self):
+        src = """\
+struct foo {
+    uint32_t bar;
+};
+"""
+        ast = parser.parse(src)
+        self.assertEquals(ast.asList(),
+            [['struct', 'foo', [['uint32_t', 'bar']]]])
+
+    def test_multiple_fields(self):
+        src = """\
+struct foo {
+    uint32_t bar;
+    uint8_t baz;
+    uint64_t abc;
+};
+"""
+        ast = parser.parse(src)
+        self.assertEquals(ast.asList(),
+            [['struct', 'foo',
+                [['uint32_t', 'bar'],
+                 ['uint8_t', 'baz'],
+                 ['uint64_t', 'abc']]]])
+
+    def test_array_type(self):
+        src = """\
+struct foo {
+    uint32_t[4] bar;
+};
+"""
+        ast = parser.parse(src)
+        self.assertEquals(ast.asList(),
+            [['struct', 'foo', [['uint32_t[4]', 'bar']]]])
+
+    def test_list_type(self):
+        src = """\
+struct foo {
+    list(of_action_t) bar;
+};
+"""
+        ast = parser.parse(src)
+        self.assertEquals(ast.asList(),
+            [['struct', 'foo', [['list(of_action_t)', 'bar']]]])
+
+class TestMetadata(unittest.TestCase):
+    def test_version(self):
+        src = """\
+#version 1
+"""
+        ast = parser.parse(src)
+        self.assertEquals(ast.asList(), [['metadata', 'version', '1']])
+
+class TestToplevel(unittest.TestCase):
+    def test_multiple_structs(self):
+        src = """\
+struct foo { };
+struct bar { };
+"""
+        ast = parser.parse(src)
+        self.assertEquals(ast.asList(),
+            [['struct', 'foo', []], ['struct', 'bar', []]])
+
+    def test_comments(self):
+        src = """\
+// comment 1
+struct foo { //comment 2
+// comment 3
+   uint32_t a; //comment 5 
+// comment 6
+};
+// comment 4
+"""
+        ast = parser.parse(src)
+        self.assertEquals(ast.asList(),
+            [['struct', 'foo', [['uint32_t', 'a']]]])
+
+    def test_mixed(self):
+        src = """\
+#version 1
+struct foo { };
+#version 2
+struct bar { };
+"""
+        ast = parser.parse(src)
+        self.assertEquals(ast.asList(),
+            [['metadata', 'version', '1'],
+             ['struct', 'foo', []],
+             ['metadata', 'version', '2'],
+             ['struct', 'bar', []]])
+
+class TestErrors(unittest.TestCase):
+    def syntax_error(self, src, regex):
+        with self.assertRaisesRegexp(pyparsing.ParseSyntaxException, regex):
+            parser.parse(src)
+
+    def test_missing_struct_syntax(self):
+        self.syntax_error('struct { uint32_t bar; };',
+                          'Expected identifier \(at char 7\)')
+        self.syntax_error('struct foo uint32_t bar; };',
+                          'Expected "{" \(at char 11\)')
+        self.syntax_error('struct foo { uint32_t bar; ;',
+                          'Expected "}" \(at char 27\)')
+        self.syntax_error('struct foo { uint32_t bar; }',
+                          'Expected ";" \(at char 28\)')
+
+    def test_invalid_type_name(self):
+        self.syntax_error('struct foo { list<of_action_t> bar; }',
+                          'Expected "\(" \(at char 17\)')
+        self.syntax_error('struct foo { uint32_t[10 bar; }',
+                          'Expected "\]" \(at char 24\)')
+
+    def test_invalid_member_syntax(self):
+        self.syntax_error('struct foo { bar; }',
+                          'Expected identifier \(at char 16\)')
+        self.syntax_error('struct foo { uint32_t bar baz; }',
+                          'Expected ";" \(at char 26\)')
+
+
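+# Illustrative note only: the grammar exercised by the tests above can also be
+# driven directly, e.g. from an interactive session (expected AST inferred
+# from test_empty and test_one_field):
+#
+#     >>> import loxi_front_end.parser as parser
+#     >>> parser.parse("struct foo { uint32_t bar; };").asList()
+#     [['struct', 'foo', [['uint32_t', 'bar']]]]
+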
+if __name__ == '__main__':
+    unittest.main()