AI for C# Developers

From Semantic Kernel Awareness to Microsoft Agent Framework

Note on authorship and collaboration: This guide was co-created through an AI-assisted workflow. The human contributor (Bill) set goals, constraints, and provided iterative feedback; GitHub Copilot generated structure, terminology definitions, and suggested wording, then applied edits in the workspace. Treat this as a collaborative, living document reflecting human intent assisted by AI.

Status: Draft v0.1 (skeleton)
Draft notice: The server framework is being refactored to a layered DI architecture; sections may lag the latest design until the refactor is complete.
Audience: C#/.NET 8–9 developers building AI apps without Azure, with a smooth Azure migration path

Project Links (read, run, source)

Note: You may be reading this on the blog or directly from the repository. The links above are canonical for project source and a running demo.


1) Goals and Scope

  • Help C# developers understand AI terminology, core flows, and architecture choices.
  • Start non-Azure (OpenAI API + local/open-source vector DB). Keep an easy on-ramp to Azure.
  • Anchor example: “NotepadAI App” — an open-source starter application enabling document/blog ingestion, semantic search, and RAG answers; a separate private "Saints App" will later build atop NotepadAI for Christian studies and will drive iterative improvements back into the open-source platform.

2) Prerequisites

  • Skills: C# async/await, dependency injection, minimal APIs/ASP.NET Core, REST/JSON basics.
  • Tooling: .NET 8/9 SDK, Git, Docker (optional), VS/VS Code, Postman/REST Client.
  • Accounts/Keys: OpenAI API key (non-Azure baseline). Azure account optional for migration.

3) Terminology (Glossary)

  • LLM: Large Language Model. A deep neural network (Transformer architecture) trained on massive text corpora (books, documentation, code, web data) to predict the next token.
    • Capabilities:
      • Interpret natural language & follow instructions
      • Generate / transform / summarize text
      • Classify and extract structured data
      • Assist reasoning via pattern completion
      • Provide multilingual support (model-dependent)
    • Limitations:
      • Fixed context window (e.g., 16K–200K tokens per request)
      • Stateless between calls unless prior messages are re-sent
      • Susceptible to hallucination (may fabricate facts)
      • No real-time external knowledge unless you retrieve & inject (RAG/tools)
      • Sensitive to prompt phrasing and ordering
    • Best Practices:
      • Keep prompts concise and role-focused
      • Supply grounded retrieved context (avoid unsupported speculation)
      • Track token usage (cost & latency)
      • Summarize older conversation turns to conserve window
      • Enforce output schema (JSON / markdown) for predictable parsing
      • Log prompts & responses for observability and QA
  • Token: The unit of text the model processes and bills against.
    • What it is: an integer id for a text piece produced by a tokenizer (e.g., byte-pair/WordPiece). Token ≠ word; it may be a word, subword, punctuation, or whitespace.
    • When created:
      • Before embedding: input text is tokenized; the embedding model consumes tokens to produce a vector.
      • Before generation: the entire prompt (system + user + retrieved chunks + tool outputs) is tokenized; these are input tokens.
      • During generation: the model predicts one token at a time; these are output tokens.
    • Why it matters:
      • Context window is measured in tokens (input + output), limiting how much you can send and receive in one call.
      • Cost and latency scale with token counts; most providers price input and output tokens separately.
      • Truncation and splitting happen at token boundaries.
    • Counting & lengths:
      • Token ≠ character; English averages ~3–4 characters/token, but varies by text and tokenizer.
      • Spaces/newlines/punctuation can be separate tokens depending on tokenizer.
      • Different models have different tokenizers and max windows; don’t assume interchangeability.
    • Display vs storage:
      • Tokens are not UI elements or database keys.
      • The UI displays detokenized text (decoded from token ids) returned by the model.
    • Practical tips:
      • Budget tokens across: system + retrieved context (chunks) + user + expected output.
      • Keep chunks concise; remove boilerplate to save tokens.
      • Use a tokenizer utility in .NET to estimate prompt size and cap top-k retrieval dynamically (see the token-budgeting sketch after this glossary).
      • Cache token counts for static corpus content to speed prechecks.
    • Example:
      • "St. Anthony of Egypt" may tokenize roughly as ["St", ".", "ĠAnthony", "Ġof", "ĠEgypt"] (illustrative; actual pieces depend on the tokenizer).
    • Misconceptions:
      • A token is not a security key or lookup id; it’s a processing unit for the model.
      • Tokens are not raw bytes; they’re vocabulary ids specific to a model’s tokenizer.
  • Embedding: A fixed-length numeric representation (vector) of text produced by an embedding model such that semantically similar texts are near each other in vector space.
    • Models & Dimensions: e.g., OpenAI text-embedding-3-small (1536 dims), text-embedding-3-large (3072 dims); pick one per index and stay consistent for queries.
    • Workflow:
      • Ingest: chunk documents → compute embedding for each chunk → store vector + text + metadata in vector store.
      • Query: embed user question → similarity search → retrieve top-k chunks → pass retrieved text to LLM.
    • Uses:
      • Semantic search (RAG)
      • Clustering/grouping
      • Deduplication & near-duplicate detection
      • Lightweight classification/tag suggestion
      • Reranking (combine lexical + semantic signals)
    • Best Practices:
      • Record model name/version + dimension in schema
      • Normalize vectors if store expects unit length (cosine)
      • Cache embeddings by content hash (avoid recompute)
      • Batch API calls for throughput / rate limit efficiency
      • Keep chunks topical; strip boilerplate/HTML noise
      • Use multilingual model if corpus spans languages
    • Pitfalls:
      • Mixing different embedding models between index & query
      • Dimension mismatch vs. index schema
      • Ignoring rate limits (add retry/backoff)
      • Embedding giant unchunked documents (low precision)
      • Failing to re-embed after model or chunking change
      • Using different similarity metric than index assumption
    • Cost & Privacy:
      • Cheaper than chat completions per token processed
      • Still external unless self-hosted model (consider PII redaction)
      • Cache + deduplicate reduces spend dramatically
  • Chunk: A semantically coherent segment (a self-contained passage focused on one idea—e.g., a paragraph, section, or blog snippet) of a document created during ingestion for retrieval/embedding. Typically 300–1000 tokens with optional overlap to preserve context; stored with metadata (author, tags, category, publishDate, domain) and original source reference.
  • Vector: A numeric array (float[]) that encodes the meaning of text in a high-dimensional space.
    • Example: "desert hermit" → [0.12, -0.04, 0.98, …]; the dimension depends on the embedding model (e.g., 384, 768, 1024; 1536 for OpenAI text-embedding-3-small, 3072 for larger models).
    • Similar texts produce vectors that are close under cosine similarity or dot product (e.g., "anchorite monk" is close; "accounting spreadsheet" is far).
    • In practice you: (1) embed each chunk and store its vector with metadata, (2) embed the user query, (3) retrieve nearest neighbors by similarity (see the similarity sketch after this glossary).
    • Keep indexing and querying on the same model and similarity metric; some stores expect normalized vectors for cosine.
  • Vector Store: Database optimized for nearest-neighbor search over vectors (e.g., Qdrant HNSW, Weaviate, Azure AI Search vector fields). You must set the field dimension to match the embedding model.
  • Qdrant: Open-source vector database focused on semantic search. HNSW-based ANN index, payload metadata for filtering, REST/gRPC APIs, easy local Docker usage; accessible from .NET via HTTP clients.
  • Weaviate: Open-source vector database with a schema-first approach and hybrid search. GraphQL/REST APIs, modules for reranking/transformers, good Docker support; accessible from .NET via HTTP clients.
  • Similarity Search: Find nearest vectors (cosine/dot/Euclidean) to a query vector; returns top-k chunks with scores.
  • RAG: Retrieval-Augmented Generation. Architecture that combines external retrieval with generation to produce grounded answers.
    • Core Components: retriever (vector/hybrid search), chunk store with metadata, prompt assembler, LLM, post-processor (citations/formatting).
    • Workflow: embed query → retrieve top-k relevant chunks (optionally filter by metadata) → build prompt (system + user + context) → generate answer → attach citations/scores.
    • Benefits: improved factual accuracy, domain specificity, dynamic updates without fine-tuning, transparent sourcing.
    • Best Practices: limit chunk count (balance coverage vs. token budget), deduplicate near-identical chunks, enforce source attribution, fallback to clarification if retrieval confidence low, monitor retrieval precision/recall.
    • Pitfalls: prompt stuffing (too many tokens), low-quality chunking (mixed topics), missing metadata filters, stale embeddings after corpus change, ignoring retrieval scoring thresholds.
    • Evaluation Metrics: answer correctness (manual or automated), citation validity, retrieval hit rate, latency breakdown (embed vs. search vs. generate), token usage.
  • Agent: An active decision-making wrapper around an LLM that can invoke tools, manage memory, and apply policies.
    • Responsibilities: interpret user intent, decide which tool(s) to call, integrate tool outputs into prompts, manage conversation context, enforce guardrails.
    • Structure: system prompt (role/policies), tool registry (capabilities), memory interfaces (short-term chat history + long-term vector store), orchestrator/planner.
    • Memory Types: ephemeral (current conversation turns), persistent (vector store/domain data), summarizations (compressed history), scratchpad (intermediate reasoning steps if supported).
    • Best Practices: keep tool surface minimal & well-described, validate tool outputs before injection, cap conversation length via summarization, log decisions for observability, isolate domain-specific rules in system prompt.
    • Pitfalls: over-broad tools that leak sensitive data, unbounded conversation growth, inconsistent tool naming, mixing retrieval & generation without grounding checks, silently failing tool calls.
    • Observability: log selected tools, latency per tool, tokens in/out, retrieval scores, error/retry counts.
  • Tool: An executable capability the agent can call (search DB, call API, run code). Formerly “skill” in SK.
  • Orchestration: The control layer that sequences agent actions, tool invocations, and prompt assembly.
    • Patterns: single-step (direct answer), multi-step planning (decompose tasks), tool chaining (search → summarize → format), conditional branching (retry/fallback), parallel retrieval (multiple indices).
    • Responsibilities: choose next action, manage error handling/retries, consolidate tool outputs, enforce ordering constraints, produce final response package.
    • Error Handling: retries with backoff for transient failures, circuit-breaker for failing tools, graceful degradation (fallback answer explaining limitation).
    • Best Practices: explicit tool metadata (rate limits, cost hints), timeouts per tool, structured intermediate state, minimal serialization overhead, metrics instrumentation.
    • Pitfalls: tight coupling between agent and concrete tools (hard to swap), hidden side-effects, lack of backpressure under high request volume, missing telemetry (hard to debug quality issues).
    • Metrics: average orchestration steps per query, tool success rate, cumulative latency, token amplification (retrieved vs. used), failure modes distribution.
  • Memory: Short-term conversation state and/or long-term knowledge (e.g., vector store).
  • MCP: Model Context Protocol. A standard for interoperable communication between models, agents, and tools.
    • Purpose: decouple agent logic from specific tool/model implementations, enable portable tool definitions.
    • Abstractions: tools (declared capabilities & schemas), resources (data references), messages/events (invocation, result, error), context envelopes (state passed along call chain).
    • Flow: agent formulates tool request → MCP transports structured call → tool executes (DB/vector search/etc.) → returns standardized result payload → agent integrates into next reasoning step.
    • Advantages: interoperability, composability (share tools across agents), consistent error & schema handling, easier migration (swap providers), improved observability via standardized events.
    • Best Practices: precise tool schemas (inputs/outputs), include versioning, define error codes, keep payloads lean (avoid huge raw blobs), secure endpoints (auth/z), validate responses before prompt injection.
    • Pitfalls: ambiguous schema fields, oversized responses inflating context window, ignoring version changes, lack of auth leading to misuse, mixing untrusted raw data into prompts without sanitization.
    • Observability: log tool invocation id, duration, payload size, version, error code.
  • Prompt: The structured instruction + input payload you send to an LLM (often represented as ordered messages: system, user, assistant, tool). A good prompt constrains behavior, sets role, supplies task, provides context (retrieved chunks), defines output format, and lists guardrails. Typical RAG pattern:
    • System: role + high-level rules (cite sources, domain scope).
    • Context: retrieved chunks injected (each tagged with source/ID).
    • User: the question or task.
    • Optional tool messages: prior tool outputs (e.g., search results JSON).
    • Output directive: “Return JSON with fields: answer, citations[].”
    • Anti-patterns: stuffing unrelated chunks, contradictory instructions, overlong examples that consume the context window.
    • Visibility: prompts are NOT automatically visible to end users; you choose what to expose (e.g., only the final answer + citations).
  • Context Window: The maximum number of tokens (input + model generated output) the LLM can hold in a single request. If the sum of: system + prior messages + injected chunks + user question + expected output tokens > window size, older or excess tokens must be truncated or excluded. Drives design decisions: chunk size, top-k retrieval, summary of prior conversation. Not what the user sees; it's the internal working memory limit of the model for that call.
  • Grounding Data: External facts injected into prompts to reduce hallucination.
  • Hallucination: Confident but incorrect output; mitigated via RAG, citations, constraints.
  • Vector Store Selection (1-line): Qdrant for fast semantic + filtering simplicity, Weaviate for schema/hybrid & modules, FAISS for embedded prototyping (in-process, minimal infra), Azure AI Search for managed, scalable hybrid + enterprise integration.
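To make the glossary concrete, below is a pair of small C# sketches covering the two mechanics most entries lean on: counting tokens to budget a prompt, and ranking chunks by cosine similarity. The SharpToken package, the literal strings, and the placeholder vectors are illustrative assumptions, not project code.

```csharp
// Token budgeting sketch (assumes the SharpToken NuGet package; use whatever
// tokenizer matches your target model). All strings are placeholders.
using SharpToken;

var encoding = GptEncoding.GetEncoding("cl100k_base"); // tokenizer must match the model

string systemPrompt = "You answer only from the provided context and cite sources.";
string userQuestion = "Who was St. Anthony of Egypt?";
string[] rankedChunks = { "chunk one...", "chunk two..." }; // top-k from similarity search

int Count(string text) => encoding.Encode(text).Count;

const int inputBudget = 8000; // leave headroom for output tokens
int used = Count(systemPrompt) + Count(userQuestion);

var selected = new List<string>();
foreach (var chunk in rankedChunks) // keep adding chunks until the budget is spent
{
    int cost = Count(chunk);
    if (used + cost > inputBudget) break;
    selected.Add(chunk);
    used += cost;
}

Console.WriteLine($"~{used} input tokens across {selected.Count} retained chunks.");
```

And the similarity side, over in-memory vectors (real float[] values would come from an embedding model):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SimilarityDemo
{
    // Cosine similarity: 1.0 = same direction (similar meaning), near 0 = unrelated.
    public static float Cosine(float[] a, float[] b)
    {
        float dot = 0, magA = 0, magB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            magA += a[i] * a[i];
            magB += b[i] * b[i];
        }
        return dot / (MathF.Sqrt(magA) * MathF.Sqrt(magB));
    }

    // Rank chunks by similarity to the query vector and keep the best k.
    public static IEnumerable<(string Text, float Score)> TopK(
        float[] query, IEnumerable<(string Text, float[] Vector)> chunks, int k) =>
        chunks.Select(c => (c.Text, Score: Cosine(query, c.Vector)))
              .OrderByDescending(c => c.Score)
              .Take(k);
}
```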

4) High-Level Flows

  • Ingestion Flow
    • Raw docs (Markdown/HTML/TXT) → Chunk → Add metadata (author, tags, category, publishDate, domain) → Embedding → Upsert to vector DB.
  • Query Flow
    • User query → Agent → Tool: vector/search (+ optional filters) → Retrieve top-k chunks → Compose prompt → LLM → Answer + citations.
  • Migration Flow
    • Start: OpenAI + Qdrant/Weaviate (open-source vector stores) + ASP.NET Core.
    • Migrate: Azure OpenAI + Azure AI Search + App Service/Container Apps + Key Vault + Monitor.
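In code, the ingestion and query flows reduce to "embed then upsert" and "embed then search." The record and interfaces below are illustrative assumptions (section 11 names the real abstraction seams); a production version would add chunking, batching, and retries.

```csharp
// Sketch of the two flows against hypothetical abstractions.
public record Chunk(string Id, string Text, float[] Embedding, string Author,
                    string[] Tags, string Category, DateOnly PublishDate, string Domain);

public interface IEmbeddingService
{
    Task<float[]> EmbedAsync(string text);
}

public interface IVectorStoreClient
{
    Task UpsertAsync(Chunk chunk);
    Task<IReadOnlyList<(Chunk Chunk, float Score)>> SearchAsync(float[] query, int topK);
}

public class IngestAndQuery(IEmbeddingService embeddings, IVectorStoreClient store)
{
    // Ingestion: chunk -> metadata -> embed -> upsert.
    public async Task IngestAsync(IEnumerable<Chunk> chunks)
    {
        foreach (var chunk in chunks)
        {
            var vector = await embeddings.EmbedAsync(chunk.Text);
            await store.UpsertAsync(chunk with { Embedding = vector });
        }
    }

    // Query: embed question -> retrieve top-k; prompt assembly and the LLM call follow.
    public async Task<IReadOnlyList<(Chunk Chunk, float Score)>> RetrieveAsync(
        string question, int topK = 5)
    {
        var queryVector = await embeddings.EmbedAsync(question);
        return await store.SearchAsync(queryVector, topK);
    }
}
```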

5) Semantic Kernel (Awareness)

  • What it is: A developer library (C# and Python) for orchestrating prompts and LLM “functions,” popular for early prototyping.
  • Core concepts (recognizable terms):
    • Kernel: central container that wires up services (models, memory, connectors) and invokes functions.
    • Skill/Plugin: a named collection of functions; can be native (C# methods) or semantic (prompt templates).
    • Function: an invocable unit (native or prompt-based) with descriptions, inputs, and outputs.
    • Planner: components that generate a plan (a sequence of functions) from a high-level goal (e.g., Action/Sequential planners).
    • Prompt Template: parameterized prompt with inputs and execution settings (temperature, max tokens, etc.).
    • Memory: abstractions for storing and retrieving embeddings/metadata via connectors (e.g., Azure Cognitive Search, Qdrant via community connectors).
    • Connectors: integrations to external services and data sources.
    • Context Variables: shared bag of inputs/outputs passed between functions in a pipeline.
  • Typical workflow: register models and memory → import skills/plugins → use a planner to build a plan → execute plan step-by-step (each step reads/writes context) → return final result.
  • Strengths: approachable mental model; easy to bind C# methods as tools; prompt templates + function descriptions; worked well for single-agent demos and small pipelines.
  • Limitations: orchestration and multi-agent patterns required boilerplate; planning reliability varied by scenario; uneven connector quality; less standardized tool schema; harder to scale complex agent workflows.
  • Evolution: not deprecated, but Microsoft Agent Framework is the successor with unified orchestration, MCP-native tools, clearer memory model, and production-focused APIs.
  • Migration map (quick):
    • Kernel → Agent
    • Skill/Function → Tool
    • Planner → Orchestrator
    • Memory connectors → Agent memory + vector tool/retriever
  • When to keep SK: maintaining existing apps or quick one-off prototypes already built on SK.
  • When to choose Agent Framework: new builds, multi-tool/multi-agent systems, MCP integration, clearer migration to Azure.
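For recognition purposes, a minimal SK 1.x setup looks roughly like this in C#; method names have shifted across SK versions, so treat it as an illustrative sketch rather than canonical usage.

```csharp
// Semantic Kernel sketch (1.x style; assumes the Microsoft.SemanticKernel package).
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
var kernel = builder.Build();

// A prompt-based ("semantic") function with a templated input variable.
var summarize = kernel.CreateFunctionFromPrompt("Summarize in one sentence: {{$input}}");

var result = await kernel.InvokeAsync(summarize,
    new KernelArguments { ["input"] = "Long text here..." });
Console.WriteLine(result);
```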

6) Microsoft Agent Framework (Path Forward)

  • Value: Unified orchestration, MCP-native tools, simpler APIs, production-first.
  • Core pieces:
    • Agent: Encapsulates reasoning, memory, and tool usage.
    • Tool: Declarative capability (e.g., vector search, summarization, fetch biography).
    • Orchestrator: Plans/decides tool calling and error handling.
    • Memory: Conversation state + external knowledge via vector store.
  • Non-Azure baseline
    • Models: OpenAI GPT + text-embedding-3-small (or compatible).
    • Vector DB: Qdrant, Weaviate, or local FAISS.
    • API: ASP.NET Core minimal API/Controllers.
  • Azure path (drop-in replacements)
    • Models: Azure OpenAI (same API surface via SDKs).
    • Vector: Azure AI Search (vector + hybrid search, filters, scaling).
    • Platform: App Service/Container Apps + Key Vault + Monitor/Log Analytics.
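To keep the vocabulary grounded, the sketch below shows illustrative C# shapes for these core pieces. It is emphatically not the Agent Framework's actual API; it only shows how agent, tool, and memory responsibilities tend to factor.

```csharp
// Illustrative shapes only; not the Agent Framework's real types.
public interface ITool
{
    string Name { get; }
    string Description { get; }                     // the model uses this to pick tools
    Task<string> InvokeAsync(string argumentsJson); // e.g., run a vector search
}

public interface IAgentMemory
{
    Task<IReadOnlyList<string>> RecallAsync(string query, int topK); // long-term (vector store)
    Task RememberAsync(string content);                              // persist new knowledge
}

// The orchestrator loop (elided) would: call the LLM, detect tool requests,
// invoke the matching ITool, fold the result back into the prompt, and repeat.
public sealed class Agent(string systemPrompt, IReadOnlyList<ITool> tools, IAgentMemory memory)
{
    public string SystemPrompt { get; } = systemPrompt;
    public IReadOnlyList<ITool> Tools { get; } = tools;
    public IAgentMemory Memory { get; } = memory;

    public Task<string> RunAsync(string userMessage) =>
        throw new NotImplementedException("LLM + tool loop elided in this sketch.");
}
```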

Visual Studio/.NET Aspire: Cloud-ready local development

  • What it is: an opinionated stack (templates + tooling) in .NET 8/9 and Visual Studio for building cloud-native apps locally, with first-class Azure deployment when ready.
  • Why it helps for AI:
    • Orchestrates multi-project solutions locally via an AppHost (e.g., NotebookAI.AppHost) and shared ServiceDefaults (e.g., NotebookAI.ServiceDefaults) for resiliency, health checks, and OpenTelemetry.
    • Runs dependencies (e.g., Qdrant/Weaviate containers) alongside your API/worker services with unified configuration (env vars, connection strings), no Azure subscription required during dev.
    • Promotes cloud-neutral code: swap OpenAI→Azure OpenAI or Qdrant→Azure AI Search primarily via configuration and DI wiring, not code rewrites.
    • Streamlines deployment: Visual Studio publish targets (e.g., Azure Container Apps/App Service) understand Aspire conventions (health probes, logging, secrets), making the transition from local to Azure low-friction.
    • Secrets & config: use User Secrets locally; map to Azure Key Vault/App Config in prod without changing code.
  • ROI: prototype and iterate locally without cloud spend; when you need to scale, the same Aspire-based solution is already structured for Azure deployment with minimal changes.
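A minimal AppHost sketch (assumes Aspire's container-hosting APIs and the generated Projects class; resource names and the environment mapping are illustrative):

```csharp
// Program.cs in NotebookAI.AppHost (illustrative).
var builder = DistributedApplication.CreateBuilder(args);

// Run Qdrant in a container next to the API during local development.
builder.AddContainer("qdrant", "qdrant/qdrant")
       .WithHttpEndpoint(port: 6333, targetPort: 6333);

builder.AddProject<Projects.NotebookAI_Server>("api")
       .WithEnvironment("Qdrant__Endpoint", "http://localhost:6333"); // swap via config, not code

builder.Build().Run();
```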

7) Anchor Example: NotepadAI App (Reference Architecture)

  • Purpose: Open-source starter for AI-enabled content management (upload documents, author blogs, semantic search, RAG answers) with extensibility via dependency injection so developers can tailor or extend features; a private Saints App will build atop this framework for a specialized religious study corpus.
  • Projects (example layout)
    • Web API (NotebookAI.Server): endpoints /ingest and /query.
    • Agents (NotebookAI.Services): agent definition + system prompt + orchestration.
    • Tools (NotebookAI.Services): VectorSearchTool, SummarizeContentTool, optional domain-specific tools.
    • Data/Infra: Vector DB client, chunking/embedding service.
  • Data model (chunk)
    • id, text, embedding, author, tags, category, publishDate, source, domain.
  • Request/Response
    • Query request: text + optional filters (tags, category, domain, topK).
    • Response: answer, citations (source, snippet, score), tokens/latency.
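In C#, that request/response shape maps naturally onto small records; the names below are assumptions for illustration, not the final contract.

```csharp
// Angular-friendly DTO sketch for the /query endpoint.
public record QueryRequest(
    string Text,
    string[]? Tags = null,
    string? Category = null,
    string? Domain = null,
    int TopK = 5);

public record Citation(string Source, string Snippet, float Score);

public record QueryResponse(
    string Answer,
    IReadOnlyList<Citation> Citations,
    int TokensUsed,
    double LatencyMs);
```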

8) Series Roadmap (Build in Small Steps)

(Current Status: Actively refactoring server framework to a layered DI architecture—roadmap execution is paused and will resume immediately after this refactor is complete.)

  1. Project setup (.NET 8/9, packages, config, secrets).
  2. Chunking + embeddings service (stream-safe chunk sizes, overlap, metadata extraction).
  3. Vector store integration (Qdrant/Weaviate/Azure AI Search) + schema.
  4. Tools in Agent Framework (vector search, summarize, format citations).
  5. NotepadAI agent (system prompt, guardrails, tool wiring).
  6. RAG prompt assembly (top-k selection, metadata filters, answer style).
  7. Web API endpoints + Angular-friendly DTOs.
  8. Observability (structured logs, timing, token usage, retrieval metrics).
  9. Quality & safety (grounding checks, citations, evaluation set).
  10. Azure migration checklist (resource mapping, config, CI/CD, secrets, monitoring).

9) Prompt & Memory Strategy (Draft)

  • System prompt template tuned for mixed document/blog corpus; require citations.
  • Retrieval policy: top-k, diversity by source, metadata filters (tags/category/domain).
  • Conversation: short-term chat memory (summarized) vs. long-term knowledge (vector store).
  • Guardrails: refuse to answer outside corpus domain unless explicit confirmation (Saints App can further narrow domain to religious studies).
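As a sketch of the first and last bullets, a system prompt implementing this policy might read as follows (wording illustrative, not the shipped prompt):

```csharp
// Draft system prompt enforcing citations and corpus-domain guardrails.
const string SystemPrompt = """
    You are NotepadAI, an assistant that answers strictly from the provided context.
    Rules:
    - Cite every claim using the [source:id] tag attached to each context chunk.
    - If the context does not contain the answer, say so and ask a clarifying question.
    - Stay within the corpus domain; do not speculate beyond the retrieved text.
    """;
```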

10) Quality, Safety, and Cost

  • Hallucination reduction: strict use of retrieved context, quote snippets, expose sources.
  • Safety: optional content filters; corpus/domain scope prompts (Saints App adds theology-specific constraints).
  • Cost/latency: shorter chunks, careful top-k, cache embeddings, reuse conversations.

11) Migration Path Details

  • Abstractions: IEmbeddingService, IVectorStoreClient, IRetriever, IAgentAdapter.
  • Swap implementations: OpenAI → Azure OpenAI; Qdrant/Weaviate → Azure AI Search.
  • Platform: Containerize; use Key Vault for secrets; monitor with App Insights.
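A sketch of those seams, with a configuration-driven swap; member signatures and the implementation class names are assumptions — only the interface names come from the list above.

```csharp
// Abstraction seams for the migration path (hypothetical members).
public interface IEmbeddingService  { Task<float[]> EmbedAsync(string text); }
public interface IVectorStoreClient { Task<IReadOnlyList<string>> SearchAsync(float[] query, int topK); }
public interface IRetriever         { Task<IReadOnlyList<string>> RetrieveAsync(string query, int topK); }
public interface IAgentAdapter      { Task<string> AnswerAsync(string question); }

// In Program.cs, pick implementations by configuration instead of code changes:
//   if (builder.Configuration["AI:Provider"] == "AzureOpenAI")
//       builder.Services.AddSingleton<IEmbeddingService, AzureOpenAIEmbeddingService>();
//   else
//       builder.Services.AddSingleton<IEmbeddingService, OpenAIEmbeddingService>();
```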

12) Checklists (Appendix)

  • Ingestion
    • Normalize format; Chunk; Metadata; Embed; Upsert; Verify recall.
  • RAG response
    • Retrieve top-k; Deduplicate; Assemble prompt; Answer; Cite sources.
  • Migration readiness
    • Interfaces in place; Config by environment; Dockerized; Health/metrics; Secrets externalized.

13) Next Steps

  • Flesh out each section with concrete code samples for .NET 8/9.
  • Provide minimal Qdrant/Weaviate docker-compose for local dev.
  • Add Azure AI Search mapping guide and sample ARM/Bicep templates.

References

  • Microsoft Agent Framework docs and migration guide (Semantic Kernel → Agent Framework).
  • OpenAI embeddings and chat completion APIs.

Understanding Dependency Injection


LinqPad Script: WeatherForecastR5.linq (12.09 kb)

I'll start at the end (literally) and give the key information you'll need to know about dependency injection.  WebApi and ASP.NET Core applications use a dependency injection system to instantiate classes; in the case of this application, when a route is selected (figure 10b lines 211-213) the class for that route is instantiated and then invoked, e.g., HomePage, WeatherPage, and ToggleService.

The IOC system (which I'll just refer to as "the system") will look in its service collection registrations (figure 10a lines 174-183) to not only instantiate the class, but also provide its parameters. The registrations tell the system how to instantiate a class, e.g., as Transient (new instance each request), Scoped (per session/request), or Singleton (everyone shares the same instance). The difference between scoped and singleton is that if 5 people hit the website at the same time, each gets their own scoped instance, isolated from the other 4 users. Within a session, the scoped instance behaves like a singleton, but only for that user. Singleton instances, by contrast, are shared by every user.
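In code, those lifetimes are declared when services are registered. Here's a minimal, compilable sketch using this article's types as stand-ins (see figure 10a for the real registrations):

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddTransient<IWeatherFormatter, TableFormatter>(); // new instance each resolve
builder.Services.AddScoped<ToggleService>();                        // one per request/session, per user
builder.Services.AddSingleton<IFoo, Foo>();                         // one instance shared by everyone

var app = builder.Build();
app.Run();

// Stubs standing in for the article's classes:
public interface IWeatherFormatter { }
public class TableFormatter : IWeatherFormatter { }
public class ToggleService { }
public interface IFoo { }
public class Foo : IFoo { }
```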

The system uses constructor injection to instantiate the class and its parameters. By default, the system looks for the constructor with the largest number of parameters, obtains an instance of each parameter, and then invokes that constructor to create the class. All classes and parameters must be declared in the service registrations, aka the "container".

Note that as each parameter is instantiated, its own constructor parameters are also looked up in the container, instantiated, and provided. This is referred to as propagating the dependency chain; as long as "new" is never used to instantiate a class (breaking the chain), you can simply put an interface or class in any class constructor and the system will give you an instance of it.

Understanding this is paramount to understanding the IOC/DI system. It is the essence of Inversion of Control (IOC), aka Dependency Injection (DI): inversion of control means that instead of you instantiating a class, providing all of its constructor parameters, and invoking it, the system does it for you.
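Here's a compact sketch of that chain, using names that echo (but don't reproduce) this article's demo classes; notice that no class ever uses "new" on its dependencies:

```csharp
public interface IFoo { string GetMessage(); }
public class Foo : IFoo { public string GetMessage() => "This is FooBar"; }

public class TableFormatter
{
    private readonly IFoo _foo;
    public TableFormatter(IFoo foo) => _foo = foo; // the container supplies IFoo
    public string Format() => _foo.GetMessage();
}

public class WeatherPage
{
    private readonly TableFormatter _formatter;
    // Resolving WeatherPage pulls in TableFormatter, which pulls in IFoo:
    // the dependency chain propagates automatically.
    public WeatherPage(TableFormatter formatter) => _formatter = formatter;
    public string Render() => _formatter.Format();
}
```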


Figure 1. Overview of application running

With the basics out of the way, all that remains is understanding the function of each class. We'll cover each of the following with an overview of each class's code. You'll find that there is a clear separation of concerns, with each class having a single responsibility; there is not a lot of code in each class: it does one thing, and it does it well.


Figure 2.  Skeleton view of application components

The following are the HomePage, WeatherPage, and ToggleService. For the home page we'll introduce a second IOC container, the Unity container. Unlike the system's container, the Unity container supports setter injection (discussed below) and allows you to register additional interfaces, classes, and factories on the fly. With the system container, you can only register during system bootstrapping; once the container is built, you cannot add any more registrations.

You'll see that we provide an instance of IUnityContainer [in the image below] and use it to instantiate (resolve) the IWeatherFormatter instance. This uses a factory pattern: based on the current value of IsJson (figure 10a lines 166-171), the container will provide either a JsonFormatter or a TableFormatter instance.

Setter injection will kick in because these implementations of IWeatherFormatter both have the property below:
   [Dependency] public IFoo Bar { get; set; }

The [Dependency] attribute tells the Unity container that it needs to populate this property in the same manner as it does constructor parameters; it provides an instance. This is referred to as setter injection. You'll find that the system and Unity containers each provide different values (reference figure 10b and the comments on lines 198-203 as to why).

Armed with the knowledge of setter injection, you should now be able to look at the code in figure 9 for Foo and understand how the "Bar" class will return "This is FooBar" for its GetMessage() function.
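For reference, here is a minimal Unity sketch of the pattern (assumes the Unity container NuGet package; the attribute's namespace varies by Unity version, and this echoes rather than reproduces the article's code):

```csharp
using Unity; // Unity container package; hosts UnityContainer and [Dependency]

var container = new UnityContainer();
container.RegisterType<IFoo, Foo>();                // registrations can be added at any time
var formatter = container.Resolve<JsonFormatter>(); // setter injection fires during Resolve
Console.WriteLine(formatter.Bar.GetMessage());      // "This is FooBar"

public interface IFoo { string GetMessage(); }
public class Foo : IFoo { public string GetMessage() => "This is FooBar"; }

public class JsonFormatter
{
    // Unity populates this property during Resolve, just as it would a constructor parameter.
    [Dependency] public IFoo Bar { get; set; } = null!;
}
```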

Figure 3.  Pages and service

Below we see the results of the HomePage being clicked with the TableFormatter.


Figure 4. Home page

Below we show the results of the WeatherPage being clicked with TableFormatter


Figure 5. Weather forecast page

Below we show that the ToggleService will toggle the IsJson property, which is then returned (via bodyHtml) to the invoking process (in HtmlBase, figure 11). Once the state is toggled, any subsequent Home or Weather clicks will result in JSON being displayed.


Figure 6. Toggle service

Below are the key parts of the HtmlBase class, which our HomePage, WeatherPage, and ToggleService derive from.


Figure 7. HtmlBase class

Below we show our TableFormatter and JsonFormatter components


Figure 8. Formatters (json and html table)

We use IFoo to demonstrate how dependencies are propagated, and automagically populated, by either constructor or setter injection.


Figure 9. Foo

The magic happens in the container. The system requires that all dependencies are registered so that it knows each component's lifetime (transient, scoped, or singleton) and how to provide an instance. Below, the code is commented.


Figure 10a First part of WebAppBuilderExtension

Here we show how we can do a late registration (after the build on line 204) and, as a result, change the setting for IFoo in the Unity container; it will now have a different implementation than the system's. We also demonstrate how MiddleWare can use these registrations; it will send information to the console based on the registered implementation of its constructor parameters.


Figure 10b Second part of WebAppBuilderExtension

GetHtml() below is how our pages display their content, with JavaScript code handling button clicks and clock updates.


Figure 11.  GetHtml() code 

The decoupled nature of IOC/DI allows for easy reuse of components, as it is ultimately the container that picks and chooses the implementation for any of its interfaces.


Figure 12 - where the MiddleWare parameters are displayed

How to publish your own blog [SmarterAsp]

This blog is available on GitHub: BlogEngine.NET (Billkrat fork) 

Once you have the source code available you can publish it to a SmarterASP.NET host for as little as $2.95 a month (see the ad at the bottom right); having your own blog doesn't have to be expensive or hard to deploy and set up.

  1. Figure 1 Creating a new site in SmarterASP
  2. Figure 2 Show Deployment Information
  3. Figure 3 Get the Web Deploy publish information

    In Visual Studio
  4. Figure 4 Add a new profile and select "Import Profile"
  5. Figure 5 Point to the file you downloaded from SmarterASP
  6. Figure 6 Publish your site


Figure 1 Creating a new site in SmarterASP 


Figure 2 Show Deployment Information


Figure 3 Get the Web Deploy publish information


Figure 4 Add a new profile and select "Import Profile"


Figure 5 Point to the file you downloaded from SmarterASP


Figure 6 Publish your site