Predicting the Best Parameters for Federal Business Capture using WEKA

Which contract parameters should I choose?

What combination of features might I pursue to raise my probability of contract award?

  1. Open the WEKA Explorer.
  2. On the Preprocess tab, open the government_contracts.arff file.
  3. Perform pre-processing:
    1. Escape non-enclosure single- and double-quotes (\', \") if using a delimited text version.
    2. Check ‘UniqueTransactionID’ and click ‘Remove’.  A continuous, random transaction ID has no predictive value, and discretizing or locally smoothing it can only lead to overfitting.
    3. If you have saved the ARFF back to CSV, you will have to convert the ZIP code fields RecipientZipCode and PlaceOfPerformanceZipCode back to nominal with the unsupervised attribute filter StringToNominal, and DollarsObligated back to numeric.
    4. On the Associate tab, select the Apriori algorithm and click ‘Start’.  The results:

Fig. WEKA association rules for contract feature prediction (Predicting Award Parameters)

This indicates that if you are located in ZIP code 83110 and the work will be performed within ZIP code 83110, pursuing Firm Fixed Price contracts with the VA may give you an advantage in the acquisition.
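
The same preprocessing and rule mining can also be scripted against WEKA’s Java API for repeatability. Below is a minimal sketch, assuming government_contracts.arff sits in the working directory (the class name is illustrative); because Apriori accepts only nominal attributes, the sketch also discretizes any remaining numeric fields such as DollarsObligated:

    import weka.associations.Apriori;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.Discretize;
    import weka.filters.unsupervised.attribute.Remove;
    import weka.filters.unsupervised.attribute.StringToNominal;

    public class ContractRuleMiner {
        public static void main(String[] args) throws Exception {
            // Load the dataset (the file name/path is an assumption; adjust as needed).
            Instances data = DataSource.read("government_contracts.arff");

            // Step 3.2: drop UniqueTransactionID -- a random identifier has no
            // predictive value.  Remove expects 1-based attribute indices.
            Remove remove = new Remove();
            remove.setAttributeIndices(
                    String.valueOf(data.attribute("UniqueTransactionID").index() + 1));
            remove.setInputFormat(data);
            data = Filter.useFilter(data, remove);

            // Step 3.3: if the data round-tripped through CSV, the ZIP code fields
            // load as strings; convert all string attributes back to nominal.
            StringToNominal s2n = new StringToNominal();
            s2n.setAttributeRange("first-last");
            s2n.setInputFormat(data);
            data = Filter.useFilter(data, s2n);

            // Apriori accepts only nominal attributes, so bin any numeric fields
            // (e.g., DollarsObligated) into discrete ranges.
            Discretize discretize = new Discretize();
            discretize.setInputFormat(data);
            data = Filter.useFilter(data, discretize);

            // Step 3.4: mine association rules and print them, as on the Associate tab.
            Apriori apriori = new Apriori();
            apriori.setNumRules(10); // report up to 10 rules (the WEKA default)
            apriori.buildAssociations(data);
            System.out.println(apriori);
        }
    }

Run it with weka.jar on the classpath; the printed rules should line up with what the Associate tab reports for the same filter settings.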

Integrated Approach to Organizational Development

Organizational Development Through Distilled Best Practices

“Pain is temporary. Quitting lasts forever.”
― Lance Armstrong, Every Second Counts

Your small company is bulging at the seams on its way to mid-size stardom and needs structure to reach its next goal. Rapid organizational maturation, development, or transformation is difficult while the profit-driving aspects of the business mount pressure on disconnected technical focus areas. Each knowledge area or domain, such as project management, software development, data analysis, reporting, systems administration, and tier 2/3 service desk, may be performed by one or two individuals. To advance, the company must grow into specialization, and with that come the additional problems of aligning work across departments.

Big Data Analysis of Classified Information (Part 1)

Big Data Analytics in a Secure Environment

This nine-part article will outline the elements of using Big Data technologies for the analysis of classified information.  The topic is divided to address each facet of big data analysis of classified information:

Part 1 – Applied big data architecture

Part 2 – Information flow

Part 3 – Organizational alignment

Part 4 – Roles and responsibilities

Part 5 – Principal phases and earned value benchmarks

Part 6 – Data fusion

Part 7 – Knowledge creation

Part 8 – Visualization

Part 9 – Summary and review

Several companies have taken on the challenge of multi-level secure operating systems, arguably the largest being Sun’s Trusted Solaris 2.5.1, which is based on Solaris 2.5.1, Common Desktop Environment 1.1, and Solstice AdminSuite 2.1. The ITSEC certification granted by the UK is not presently accepted by the NSA, so it does not serve as a pre-built secure OS capability. General Dynamics C4S has also built a capability based on a Linux OS that does not mandate a SPARC architecture, making it friendlier to open-source platforms. These initiatives are creating the potential for data fusion, real-time analytics, and predictive analytics across government, NIPR, SIPR, JWICS, and coalition networks. The architecture in practice is non-trivial, but a generalized TOGAF Technical Reference Model based on Linux and open-source Hadoop, Mahout, openNLP, Cassandra, Hive, and Pig is now possible to construct.

Fig 1. Detailed Technical Reference Model (Showing Service Categories)

The TOGAF Architectural Model

A reference architecture is a useful starting point for building an enterprise-specific architecture and reduces the risk that any design facet is skipped.  The Open Group Architecture Framework (TOGAF) is one of the more widely adopted frameworks and is the concept upon which many domain-specific architectural standards are built.  Per TOGAF, “The TOGAF Foundation Architecture is an architecture of generic services and functions that provides a foundation on which more specific architectures and architectural components can be built. This Foundation Architecture is embodied within the Technical Reference Model (TRM), which provides a model and taxonomy of generic platform services.  The TRM is universally applicable and, therefore, can be used to build any system architecture.” [1]

At its most fundamental, TOGAF is broken into Application Software, Application Platform, and Communications Infrastructure, connected by Application Platform Interfaces and Communications Infrastructure Interfaces as depicted in Figure 1. This construct provides a structure for top-down planning of service catalog elements and pre-positions the organization for follow-on ITIL Service Catalog construction. Service elements connect infrastructure to applications and are used to further visualize dependencies.

Mapping the TRM to Open Source Big Data Technologies

Open Source Software (OSS) can be part of a cost-effective long-range strategy for many organizations. The US Government’s CIO declared in 2003, and again in 2009, that open source technologies should be considered closely when selecting technologies, and clarified the misconception that government-created versions of these technologies must be openly distributable to the public. Since these declarations, the Apache Foundation technologies have figured prominently in the US Government’s strategic portfolio, especially within the big data and analytics domain. Widely adopted platforms with security accreditation include Hadoop, Mahout, openNLP, Hive, Pig, Cassandra, SOLR, Lucene, the Apache Web Server, and many others. A general mapping of these technologies against the target big data architecture, along with the capabilities of a secure operating system, indicates complete coverage of core, non-specialized capabilities.

A robust open source portfolio for the analysis of classified information in this design includes capabilities for structured data analysis, unstructured data analysis, knowledge discovery, and complete multi-level classification and caveat isolation:

  • Secure OS – The secure operating system. Examples are a multi-level secure Linux or multi-level secure Solaris.
  • Router – The multi-level secure router.  This provides TCP packet extensions and routing based on security classification markings.
  • Apache – The Apache Web Server, which provides HTML and other rendering services.
  • HDFS – The Hadoop Distributed File System is the persistence (storage) layer that allows Hadoop to distribute data and operate on it.
  • Hadoop – A massively parallel processing framework for distributed, scalable computing applications.
  • Mahout – A machine learning platform implemented on Hadoop for classification and prediction of discrete and continuous data.
  • openNLP – A natural language processing platform on Hadoop for unstructured text analysis (sentence marking, tokenizing, part-of-speech extraction, entity extraction, etc.).
  • SOLR – Open-source Apache search platform built on Lucene.
  • Lucene – Full-text indexer and search engine.  Lucene will accept outputs from Mahout and openNLP models to aid searching the results of analysis (a minimal indexing sketch follows this list).
  • Hive – Apache Hive is a data warehouse infrastructure that may be used for content storage, retrieval, indexing, and other core DBMS functions.
  • HiveQL – The SQL-like Hive DDL/DML (not ANSI SQL-92 compliant) used to warehouse massively parallel datasets and operate upon them.
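
To make the Lucene item above concrete, the sketch below indexes one analytic result (the raw text plus an entity that an upstream openNLP name finder might have extracted) and then searches on that entity.  This is a minimal illustration, assuming Lucene 8 or later; the field names and the sample document are hypothetical:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.ByteBuffersDirectory;
    import org.apache.lucene.store.Directory;

    public class AnalysisResultIndexer {
        public static void main(String[] args) throws Exception {
            // In-memory index for the demo; a real deployment would use FSDirectory.
            Directory dir = new ByteBuffersDirectory();

            // Index one analytic result: the raw text plus an extracted entity.
            IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
            try (IndexWriter writer = new IndexWriter(dir, cfg)) {
                Document doc = new Document();
                doc.add(new TextField("body",
                        "Firm Fixed Price award; place of performance Afton, WY.",
                        Field.Store.YES));
                // StringField is indexed untokenized, giving exact-match lookups.
                doc.add(new StringField("entity", "Afton", Field.Store.YES));
                writer.addDocument(doc);
            }

            // Search on the extracted entity, as an analyst's query would.
            try (DirectoryReader reader = DirectoryReader.open(dir)) {
                IndexSearcher searcher = new IndexSearcher(reader);
                TopDocs hits = searcher.search(
                        new TermQuery(new Term("entity", "Afton")), 10);
                System.out.println("Matching documents: " + hits.scoreDocs.length);
            }
        }
    }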

In the next part of the series, we will look at the logical architecture and explore communication sequences within a few common scenarios.

References: