Reorganization of a Healthcare Products Technology Organization – Proposal

 

Organizational Transition – Healthcare Products Division

[Redacted] Corporation

Executive Sponsor: Don Krapohl

 

Executive Summary


[Redacted] Corporation is presently functionally organized and biased strongly toward a single customer.  The project efforts that led to this structure will draw down in FY14.  By discarding several outmoded assumptions and capitalizing on changes in the environment, the organization will be able to overcome space issues, reduce the impact of the partnership dissolution, add business development and test lab capabilities, adapt more rapidly to local changes, simplify communications, and provide line-of-sight accountability.  Changes within our market segment necessitate a more agile support structure that can serve the needs of multiple customers.  The post-transition organization chart, transition timeline, and high-level transition plan are included herein.

Observations


Only 35% of personnel provide direct support that cannot be performed offsite

Engineering functions do not need direct customer facility access

Corporate intelligence analysts do not need to interface with customers

[Redacted site] and [Redacted site] locations are entirely unsuitable as warehouse space

Physical space is a persistent concern with partner co-location

Direct engineering support to partner companies is limited to operation of hosted infrastructure, the [Redacted] product line, and training.

There is a risk of O&M support reduction upon award of a new contract by customer [Redacted] in FY14

Future changes


The Health Mining Partnership Initiative is to be dismantled in FY13

Personnel requirements for space and equipment need to be defined prior to drawdown

Functional leads to be named by discipline

Contract re-compete and partnering agreements renegotiated in FY13

 

Major customers


Our customers are closely aligned with the proposed organization:  Intelligence producers and consumers (Knowledge division), [Redacted Medical Supply Customer] (IT Support division), [Redacted Healthcare Customer] (Support division), and the other [Redacted] organizations (Portfolio Support and Enterprise Software Engineering divisions).

Intelligence producers and consumers

Mission: Provide IT capabilities and tools for business intelligence enhancement, analytics, and dissemination.  Facilitate information transfer between producers and consumers of knowledge products.

Customer location: Widely distributed, none local.

[Redacted Healthcare Customer]

Mission: Provide direct support to [Redacted Healthcare Customer] for implementation and use of analytic capabilities.

Customer location: Partner-owned/leased office space in [Redacted Healthcare Customer] and annexes.  Some servers and network elements located off-site.

[Redacted Medical Supply Customer]

Mission: Provide limited direct analytic subject matter and product support. Assist in facilitating vendor relationships and engineering/scientific support. Supply hosted infrastructure as a service and datacenter management for warehouse facilities.

 

Customer location: Owned/leased office space in [Redacted] and annexes.  Some servers and network elements located on-site.

Program partners/Enterprise Support

Mission: Provide the same capabilities delivered to other customers, along with enhanced capabilities, expanded requirements, and capacity planning to sustain support.

Customer location: None local.  Large majority not located within a primary facility.

 

Strategy


Restructure the organization in a manner that balances customer-facing footprint, intelligence functions, enterprise engineering, and Enterprise Program support.

  1. Organizational
  • Balance O&M support for partners; form lines of management by industry segment for IT direct support and customer-specific engineering efforts.
  • Add a division for Program support that includes all non-operational support and infrastructure personnel.
  • Retain divisions for Enterprise Software Engineering and Knowledge Management.
  • Remove intelligence activities from a single executive and align them in their own division.
  2. Geographic
  • Emplace only IT direct support personnel at [Redacted Healthcare Customer] and [Redacted Medical Supply Customer].
  • Relocate all other personnel to a central location.
  • Move all Enterprise-, Intel-, and Program-related functions away from partner sites.
  3. Communications
  • Email, VOIP, SharePoint, and other core services are presently hosted.
  • Portfolio performance, metrics, announcements, calendars, WARs, and other administrative coordination on SharePoint.
  • IT operations incident/request handling on the BICES service desk tool.
  4. Financial
  • Identify cost-sharing potential with another organization that needs DISA collateral-rated office space with adjacent/attached warehouse/loading facilities.
  • Buy out or subcontract the Brandon facility for the remainder of the lease.

 

Final-state Organization Chart

[Organization chart: post-reorganization view]

Assumptions


SLA-compliant connectivity can be attained in commercial space

Office equipment budget can be reallocated for the new facility

Larger organizational surface area adds noise to communications and decision cycle

Funding will come from management reserve at Atlanta CDC redesign project closure.

 

Analysis


Strengths

  • [Competitor X] is execution-aligned in this way
  • Ability to focus communication more tightly to/from/within customer efforts
  • Provides only one accountable route into and out of each group
  • More fundamental organization into operations, engineering, support, and knowledge
  • Resolves warehouse issues
  • Allows capability surge that limits space requests within partner locations
  • Breaks the organization into elements that simplify execution of SOPs and workflows
  • Optimizes Service Level Management by consolidating incident/request processes

Weaknesses

  • Must vacate offices in [Redacted Location] and [Redacted Location] and terminate leases
  • Likely to meet significant resistance from Healthcare Products and Energy Divisions as they compete for funding
  • Capital expenses on centralized facility will increase
  • Must SLA-accredit the facility quickly

Opportunities

Direct benefits

  • Easier accommodation of surge space needs
  • Fewer targets for customer off-book requests
  • Improved communications management throughout the division
  • Better alignment with modes of work (operations, support, engineering, knowledge).
  • Improved security for proprietary information

Indirect benefits

  • Adds business development (BD) capability and increases demonstration space
  • Adds to R&D capability
  • Provides potential for test lab environment
  • Enhances collaboration between functional areas

Threats

  • Loss of mass at [Redacted Healthcare Customer] could have unknown impacts on future orders (mitigated by scheduled OPT dissolution and probable forced loss of personnel).

 

Transition Plan


1 August – Plan announced

1-7 August – Discover office space options

7-10 August – Document infrastructure needs

13-17 August – Submit requests for equipment, sign lease for 1 Oct, begin facility accreditation request

20-31 August – Document and lay out workspace; order workstations, locks, safes, printers, and phones

3-14 September – Manage facility accreditation request, sign up for utilities and security monitoring

17-28 September – Stage equipment, schedule accreditation site inspection

1-12 October – Move office equipment, set up work areas, set up kitchen and coffee, move out of Brandon facility

15-26 October – Management reserve in case of schedule slip

29 October -2 November – Staff move-in of first group

5 November 2012 – 2 January 2013 – Facility burn-in

3-31 January 2013 – Emplace and configure remaining desks and workstations for second tenant group

1 February 2013 – Second group moves in

1 February 2013 – Transition complete

 

 

270-day staffing plan


Site | As-is staffing | To-be staffing | Staff matrix 1 Nov | Staff matrix (Final) 1 Feb

[Redacted Healthcare   Customer] 21 on-site 5 on-site 9 on-site 5 on-site
7 statisticians/data miners–
1 business analyst 2 network engineers 2 network engineers
4 desktop support– 2 desktop support 2 desktop support 2 desktop support
2 trainers
1 web developer – 2 web developers
1 tech writer –
1 configuration manager –
1 manager 1 manager 1 manager 1 manager
1 requirements mgr – 1 requirements mgr/Deputy PgM
1 user experience engineer – 1 IA engineer
Transition complete Feb 2013
[Redacted Medical Supply Customer] 7 on-site 10 on-site 10 on-site 10 on-site
4 systems engineers 4 systems engineers 4 systems engineers 3 software engineers
1 network engineer 2 network engineers 2 network engineers 2 desktop support
1 storage engineer 1 storage engineer 1 storage engineer 1 line-of-business architect
1 manager 1 manager 1 manager 1 business analyst
2 desktop support 2 desktop support 1 report developer
Transition complete Nov 2012 1 business intelligence developer
1 manager
Corporate HQ 13 on-site 28 on-site 25 on-site 28 on-site
1 Director 1 Director 1 Director 1 Director
2 Project Managers 2 Project Managers 2 Project Managers 2 Project Managers
1 business analyst 1 business analyst 1 business analyst 7 statistician/data miners
1 program admin 1 program admin 1 program admin 3 Software Engineers
1 logistician 1 logistician 1 logistician 2 business intelligence developers
7 software engineers 7 software engineers 7 software engineers 1 enterprise architect
6 intel analysts 6 Intel 1 user experience engineer
2 trainers 2 trainers 1 web developer
2 web developers 1 executive assistant
1 technical writer 1 technical writer 1 logistician
1 IA engineer 1 business analyst
1 configuration manager 1 configuration manager 1 technical writer
1 requirements manager/Deputy PgM 1 configuration manager
1 enterprise systems engineer 1 enterprise systems engineer 1 enterprise network engineer
1 enterprise systems engineer
1 IA engineer
1 requirements/release manager
Transition complete Feb 2013

 

Predicting Federal Contracts using Machine Learning Classification in WEKA

Can I predict which contracts will likely be awarded in my area?

By Don Krapohl

  1. Open WEKA explorer
  2. On the Preprocess tab, open the government_contracts.arff file.
  3. Perform pre-processing
    1. Escape non-enclosure single- and double-quotes (’, ”) if using a delimited text version.
    2. Check ‘UniqueTransactionID’ and click ‘Remove’.  A continuous random transaction ID has no predictive value, and discretization and local smoothing of it can lead to overfitting.
    3. If you have saved the arff back to csv, you will have to convert the ZIP code fields RecipientZipCode and PlaceOfPerformanceZipCode back to nominal with the unsupervised attribute filter StringToNominal, and DollarsObligated back to numeric.
  4. On the ‘Select Attributes’ tab, use the ClassifierSubsetEval evaluator with the Naïve Bayes algorithm and a RandomSearch search to explore attribute subset merit, predicting the Product or Service Code (PSC).  This yields:

Selected attributes: 2,3,4,6 : 4

ContractPricing

FundingAgency

PlaceofPerformanceZipCode

RecipientZipCode

 

This indicates that, using the Naïve Bayes algorithm, knowing these contract attributes gives roughly 40% predictive ability (0.407 subset merit) for the Product or Service Code.

  5. Using those attributes to predict PSC, select the Classify tab, choose bayes -> NaiveBayes with 10-fold cross-validation, set PSC as the class, and click ‘Start’.  The output will indicate F-measure and other statistics by class.  An example of a single class result is:

TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area  Class
0        0.014    0          0       0          0.972     REFRIGERATION AND AIR CONDITIONING COMPONENTS
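For reference, the per-class statistics WEKA reports are simple functions of the confusion-matrix counts. The sketch below is plain C# independent of WEKA, with made-up illustrative counts, showing how precision, recall, and F-measure are derived:

```csharp
using System;

class ClassifierMetrics
{
    static void Main()
    {
        // Hypothetical counts for a single class from a confusion matrix
        double tp = 40, fp = 10, fn = 10;

        double precision = tp / (tp + fp);  // fraction of predicted positives that are correct
        double recall = tp / (tp + fn);     // fraction of actual positives that are found
        double fMeasure = 2 * precision * recall / (precision + recall); // harmonic mean

        Console.WriteLine("Precision: {0:F2}", precision); // 0.80
        Console.WriteLine("Recall:    {0:F2}", recall);    // 0.80
        Console.WriteLine("F-Measure: {0:F2}", fMeasure);  // 0.80
    }
}
```

Note that a class can score zero on precision, recall, and F-measure at the default threshold (as in the row above) while still showing a high ROC area, because ROC area measures ranking quality across all thresholds.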

  6. View the threshold for the prediction by right-clicking the result buffer entry at the left and hovering over Threshold Curve.  Select “REFRIGERATION AND AIR CONDITIONING COMPONENTS”, for example.  The curve is as follows:

 

Classifier accuracy

 

The ROC area of 0.972 indicates strong ranking performance on this class.  The F-Measure visualization gives another view of classifier coverage:

 

Lift chart showing classifier coverage

To see an analogous cluster visualization using Excel and the SQL Server 2008 R2 add-ins, see my quick article on Activity Clustering on Geography.


Automatic Entity Extraction using openNLP in C#

Entity Extraction and Competitive Intelligence

I have been approached by multiple companies wishing to perform entity extraction for competitive intelligence. Simply put, executives want to know what their competition is up to, they want to expand their company, or they are performing market research for a proposal. The targets are typically newspaper stories, SEC filings, blogs, social media, and other unstructured content. Another frequent goal is to create intellectual property by way of a branded product. Frequently these are Microsoft .net-driven organizations, characterized by robust enterprise licensing with Microsoft, a mature product ecosystem, and a large sunk cost in existing systems, making a .net platform more amenable to their resource base and portfolio.

Making a quick-hit entity extractor possible in this environment are the open-source projects openNLP (open Natural Language Processing) and IKVM, a free implementation of the Java Virtual Machine for .net that can compile Java libraries into .net assemblies. openNLP provides entity extraction through pre-trained models for several common entity types: person, organization, date, time, location, percentage, and money. openNLP also provides for training and refinement of user-created models.

This article won’t undertake to answer the questions of requirements gathering, fitness measurement, statistical analysis, model internals, platform architecture, operational support, or release management, but these are factors which should be considered prior to development for a production application.

Preparation

This article assumes the user has .net development skill and knowledge of the fundamentals of natural language processing. Download the latest version of openNLP from The Apache Foundation website and extract it to a directory of your choice. You will also need to download models for tokenization, sentence detection, and the entity model of your choice (person, date, etc.). Likewise, download the latest version of IKVM from SourceForge and extract it to a directory of your choice.

Create the openNLP dll

Open a command prompt and navigate to the ikvmbin-(yourProductVersion)/bin directory and build the openNLP dll with the command (change the versions to match yours):
ikvmc -target:library -assembly:openNLP opennlp-maxent-3.0.2-incubating.jar jwnl-1.3.3.jar opennlp-tools-1.5.2-incubating.jar

Create your .net Project

Create a project of your choice at a known location. Add a project reference to:
IKVM.OpenJDK.Core.dll
IKVM.OpenJDK.Jdbc.dll
IKVM.OpenJDK.Text.dll
IKVM.OpenJDK.Util.dll
IKVM.OpenJDK.XML.API.dll
IKVM.Runtime.dll
openNLP.dll
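If you prefer to edit the project file directly, the references above correspond to MSBuild entries like the following sketch; the HintPath locations are placeholders for wherever you extracted the IKVM binaries and built openNLP.dll:

```xml
<ItemGroup>
  <Reference Include="IKVM.OpenJDK.Core">
    <HintPath>..\libs\IKVM.OpenJDK.Core.dll</HintPath>
  </Reference>
  <Reference Include="IKVM.OpenJDK.Jdbc">
    <HintPath>..\libs\IKVM.OpenJDK.Jdbc.dll</HintPath>
  </Reference>
  <Reference Include="IKVM.OpenJDK.Text">
    <HintPath>..\libs\IKVM.OpenJDK.Text.dll</HintPath>
  </Reference>
  <Reference Include="IKVM.OpenJDK.Util">
    <HintPath>..\libs\IKVM.OpenJDK.Util.dll</HintPath>
  </Reference>
  <Reference Include="IKVM.OpenJDK.XML.API">
    <HintPath>..\libs\IKVM.OpenJDK.XML.API.dll</HintPath>
  </Reference>
  <Reference Include="IKVM.Runtime">
    <HintPath>..\libs\IKVM.Runtime.dll</HintPath>
  </Reference>
  <Reference Include="openNLP">
    <HintPath>..\libs\openNLP.dll</HintPath>
  </Reference>
</ItemGroup>
```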

Create your Class

Copy the code below and paste it into a blank C# class file. Change the path to the models to match where you downloaded them. Compile your application and call the EntityExtractor.ExtractEntities with the content to be searched and the entity extraction type.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace NaturalLanguageProcessingCSharp
{
    public class EntityExtractor
    {
        /// <summary>
        /// Entity extraction for the entity types available in openNLP.
        /// TODO:
        ///   try/catch/exception handling
        ///   filestream closure
        ///   model training if desired
        ///   Regex or dictionary entity extraction
        ///   clean up the setting of the Name Finder model path
        ///   Implement entity extraction in other languages
        ///   Implement entity extraction for other entity types
        /// Call syntax: myList = ExtractEntities(myInText, EntityType.Person);
        /// </summary>
        private string sentenceModelPath = "c:\\models\\en-sent.bin"; //path to the model for sentence detection
        private string nameFinderModelPath;                           //NameFinder model path for English names
        private string tokenModelPath = "c:\\models\\en-token.bin";   //model path for English tokens

        public enum EntityType
        {
            Date = 0,
            Location,
            Money,
            Organization,
            Person,
            Time
        }

        public List<string> ExtractEntities(string inputData, EntityType targetType)
        {
            /* Required steps to detect names are:
             * 0. Download sentence, token, and name models from http://opennlp.sourceforge.net/models-1.5/
             * 1. Parse the input into sentences
             * 2. Parse the sentences into tokens
             * 3. Find the entity in the tokens
             */

            //------------------ Preparation -- set Name Finder model path based upon entity type ------------------
            switch (targetType)
            {
                case EntityType.Date:
                    nameFinderModelPath = "c:\\models\\en-ner-date.bin";
                    break;
                case EntityType.Location:
                    nameFinderModelPath = "c:\\models\\en-ner-location.bin";
                    break;
                case EntityType.Money:
                    nameFinderModelPath = "c:\\models\\en-ner-money.bin";
                    break;
                case EntityType.Organization:
                    nameFinderModelPath = "c:\\models\\en-ner-organization.bin";
                    break;
                case EntityType.Person:
                    nameFinderModelPath = "c:\\models\\en-ner-person.bin";
                    break;
                case EntityType.Time:
                    nameFinderModelPath = "c:\\models\\en-ner-time.bin";
                    break;
                default:
                    break;
            }

            //------------------ Preparation -- load models into objects ------------------
            //initialize the sentence detector
            opennlp.tools.sentdetect.SentenceDetectorME sentenceParser = prepareSentenceDetector();

            //initialize the name finder
            opennlp.tools.namefind.NameFinderME nameFinder = prepareNameFinder();

            //initialize the tokenizer--used to break our sentences into words (tokens)
            opennlp.tools.tokenize.TokenizerME tokenizer = prepareTokenizer();

            //------------------ Make sentences, then tokens, then get names ------------------
            String[] sentences = sentenceParser.sentDetect(inputData); //detect the sentences and load into an array of strings
            List<string> results = new List<string>();

            foreach (string sentence in sentences)
            {
                //now tokenize the input.
                //"Don Krapohl enjoys warm sunny weather" would tokenize as
                //"Don", "Krapohl", "enjoys", "warm", "sunny", "weather"
                string[] tokens = tokenizer.tokenize(sentence);

                //do the find
                opennlp.tools.util.Span[] foundNames = nameFinder.find(tokens);

                //important: clear adaptive data in the feature generators or the detection rate will decrease over time
                nameFinder.clearAdaptiveData();

                results.AddRange(opennlp.tools.util.Span.spansToStrings(foundNames, tokens).AsEnumerable());
            }

            return results;
        }

        #region private methods
        private opennlp.tools.tokenize.TokenizerME prepareTokenizer()
        {
            java.io.FileInputStream tokenInputStream = new java.io.FileInputStream(tokenModelPath); //load the token model into a stream
            opennlp.tools.tokenize.TokenizerModel tokenModel = new opennlp.tools.tokenize.TokenizerModel(tokenInputStream); //load the token model
            return new opennlp.tools.tokenize.TokenizerME(tokenModel); //create the tokenizer
        }

        private opennlp.tools.sentdetect.SentenceDetectorME prepareSentenceDetector()
        {
            java.io.FileInputStream sentModelStream = new java.io.FileInputStream(sentenceModelPath); //load the sentence model into a stream
            opennlp.tools.sentdetect.SentenceModel sentModel = new opennlp.tools.sentdetect.SentenceModel(sentModelStream); //load the model
            return new opennlp.tools.sentdetect.SentenceDetectorME(sentModel); //create the sentence detector
        }

        private opennlp.tools.namefind.NameFinderME prepareNameFinder()
        {
            java.io.FileInputStream modelInputStream = new java.io.FileInputStream(nameFinderModelPath); //load the name model into a stream
            opennlp.tools.namefind.TokenNameFinderModel model = new opennlp.tools.namefind.TokenNameFinderModel(modelInputStream); //load the model
            return new opennlp.tools.namefind.NameFinderME(model); //create the namefinder
        }
        #endregion
    }
}
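To round out the example, a minimal console harness might look like the following. This is a sketch under the same assumptions as the class above: openNLP.dll and the IKVM assemblies are referenced, and the models are present under c:\models. The sample sentence is illustrative only, and the entities returned depend on the models.

```csharp
using System;
using NaturalLanguageProcessingCSharp;

class Program
{
    static void Main()
    {
        string content = "Don Krapohl met with Apache Foundation representatives in Atlanta on Tuesday.";

        var extractor = new EntityExtractor();

        //pull person names out of the content
        foreach (string person in extractor.ExtractEntities(content, EntityExtractor.EntityType.Person))
        {
            Console.WriteLine("Person: " + person);
        }

        //the same call with a different EntityType targets the other models
        foreach (string location in extractor.ExtractEntities(content, EntityExtractor.EntityType.Location))
        {
            Console.WriteLine("Location: " + location);
        }
    }
}
```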

Hadoop on Azure and HDInsight integration
4/2/2013 – HDInsight doesn’t seem to support openNLP or any other natural language processing algorithm. It does integrate well with SQL Server Analysis Services and the rest of the Microsoft business intelligence stack, which do provide excellent views within and across data islands. I hope to see NLP on HDInsight in the near future for algorithms stronger than the LSA/LSI (latent semantic analysis/latent semantic indexing–semantic query) in SQL Server 2012.
