Explore when low-code platforms shine—and when traditional development delivers better results—with a strategic framework for choosing the right approach for your business needs.

The software development landscape is evolving rapidly. As organizations face increasing pressure to digitize operations and deliver new applications at unprecedented speeds, many are turning to low-code and no-code platforms as potential solutions. These platforms promise to democratize development, enabling business users to create applications with minimal coding knowledge while allowing professional developers to accelerate their work.

But do these platforms deliver on their promise? And more importantly, when should your organization leverage them versus investing in traditional custom development? At Leverture, we've helped clients navigate this decision process across various industries, and we've developed a nuanced perspective on where low-code truly shines and where it falls short.

The Growing Low-Code/No-Code Market

The low-code/no-code (LCNC) market has experienced explosive growth in recent years. According to Gartner, the worldwide low-code development technologies market is projected to total $26.9 billion in 2023, an increase of 19.6% from 2022. This growth is driven by several factors:

Digital Transformation Acceleration

The pandemic dramatically accelerated digital transformation initiatives across industries. Organizations that once had multi-year digitization roadmaps suddenly found themselves needing to implement new digital solutions in weeks or months, not years. Low-code platforms helped bridge this gap by enabling faster application development.

Developer Shortage

The global shortage of skilled software developers continues to be a significant challenge. The U.S. Bureau of Labor Statistics projects that software developer jobs will grow 22% from 2020 to 2030, much faster than the average for all occupations. Low-code platforms help organizations extend their development capabilities beyond their professional development teams.

Increasing Business Demand for Applications

Modern businesses require more applications than ever before. Research from OutSystems found that 65% of IT leaders report application development backlogs, with more than 10 applications waiting in the queue at any given time. Low-code platforms help address this backlog by enabling faster development cycles.

Evolving Platform Capabilities

Low-code platforms have evolved considerably from their early days. Modern platforms now support sophisticated applications with complex business logic, integration capabilities, and responsive user interfaces. This evolution has expanded the types of applications that can be feasibly built on these platforms.

Popular Low-Code/No-Code Platforms: A Comparison

The low-code/no-code market has become increasingly crowded, with platforms specializing in different types of applications and use cases. Here's a comparative overview of some leading platforms:

Enterprise Application Platforms

Microsoft Power Platform

  • Strengths: Deep integration with Microsoft ecosystem, AI capabilities, robust security
  • Weaknesses: Complex pricing, steeper learning curve compared to some alternatives
  • Best For: Microsoft-centric organizations, enterprise-wide applications, process automation

Salesforce Platform (including Lightning)

  • Strengths: Powerful CRM integration, extensive marketplace of pre-built components
  • Weaknesses: Can be expensive, primarily focused on Salesforce-adjacent use cases
  • Best For: Organizations heavily invested in Salesforce, customer-facing applications

OutSystems

  • Strengths: Enterprise-grade security, scalability, sophisticated DevOps capabilities
  • Weaknesses: Higher cost, requires more technical knowledge than true "no-code" tools
  • Best For: Mission-critical enterprise applications, mobile app development

Mendix

  • Strengths: Multi-experience development, AI-assisted development, strong collaboration features
  • Weaknesses: Enterprise pricing can be high, requires platform-specific knowledge
  • Best For: Complex enterprise applications, multi-channel experiences

Specialized Platforms

Bubble

  • Strengths: Powerful web application builder, extensive plugin ecosystem
  • Weaknesses: Performance limitations for high-traffic applications, limited offline capabilities
  • Best For: Startups, MVPs, web applications with moderate complexity

Airtable

  • Strengths: Excellent database capabilities, intuitive interface, extensive integrations
  • Weaknesses: Limited for complex applications, primarily focused on data management
  • Best For: Operational applications, data collection and management, team collaboration

Zapier

  • Strengths: Extensive integration library, simple interface, quick implementation
  • Weaknesses: Limited application complexity, primarily focused on automation
  • Best For: Process automation, system integration, workflow optimization

Webflow

  • Strengths: Professional-quality websites, fine-grained design control, hosting included
  • Weaknesses: Limited for complex application logic, primarily focused on websites
  • Best For: Marketing websites, content-driven sites, portfolio sites

Choosing the Right Platform

When evaluating platforms, consider:

  1. Integration Requirements: How well does the platform connect with your existing systems?
  2. Complexity of Use Cases: Does the platform support the complexity your applications require?
  3. Developer Experience: How intuitive is the platform for your intended users?
  4. Governance and Security: Does the platform meet your compliance and security requirements?
  5. Scalability: Will the platform support your growth in users, data, and functionality?
  6. Total Cost of Ownership: Consider licensing, training, maintenance, and potential technical debt.

Appropriate Use Cases for Low-Code in the Enterprise

Low-code platforms excel in specific scenarios. Understanding these can help you leverage these tools where they provide the most value.

Operational Efficiency Applications

Low-code platforms are excellent for applications that digitize internal processes and workflows. Examples include:

  • Approval Workflows: Streamlining request and approval processes across departments
  • Employee Onboarding: Coordinating the multiple steps involved in bringing on new team members
  • Resource Management: Tracking and allocating resources across teams and projects
  • Reporting Dashboards: Creating visual representations of key performance indicators

These applications typically have well-defined requirements, moderate complexity, and primarily serve internal users—all characteristics that align well with low-code development.

Customer and Partner Portals

Organizations often need to provide customers, partners, or vendors with access to specific information and functionality. Low-code platforms can effectively power these experiences:

  • Customer Self-Service Portals: Enabling customers to check status, submit requests, or update information
  • Partner Collaboration Spaces: Facilitating joint work and information sharing with external partners
  • Vendor Management Portals: Streamlining interactions with suppliers and service providers

These applications benefit from low-code's ability to create user-friendly interfaces quickly while integrating with backend systems.

Departmental Applications

Specialized teams often need dedicated tools tailored to their specific needs. Examples include:

  • HR Self-Service Tools: Enabling employees to update information, request time off, or access benefits
  • Marketing Campaign Managers: Coordinating and tracking marketing initiatives across channels
  • Field Service Applications: Supporting mobile workers with information and data collection capabilities
  • Compliance Documentation Systems: Managing and tracking regulatory documentation requirements

These applications often fall into the "long tail" of IT demand—important to specific departments but not necessarily prioritized by central IT teams, making them perfect candidates for low-code development.

Rapid Prototyping and MVPs

Low-code platforms excel at quickly turning concepts into working applications:

  • Concept Validation: Testing new business ideas with minimal investment
  • User Experience Prototyping: Refining interfaces through quick iterations with users
  • Minimum Viable Products: Bringing basic versions of products to market to gather feedback

The speed of development that low-code enables makes these platforms particularly valuable in innovation contexts where rapid iteration is essential.

Data Collection and Management

Many business processes require structured data collection and management:

  • Surveys and Assessments: Gathering and analyzing feedback from customers or employees
  • Inspection and Audit Tools: Supporting field data collection with validation and reporting
  • Knowledge Management Systems: Organizing and accessing organizational knowledge

Low-code platforms typically offer strong form-building capabilities, making them well-suited for these use cases.

When Custom Development Makes More Sense

Despite their growing capabilities, low-code platforms aren't the right solution for every application. Here are scenarios where traditional custom development typically delivers better results:

Complex, Mission-Critical Systems

Applications that form the core of your business operations or competitive advantage often require the flexibility and optimization that custom development provides:

  • Core Banking Systems: Managing fundamental financial transactions and records
  • Advanced E-commerce Platforms: Supporting complex pricing, inventory, and fulfillment logic
  • Industrial Control Systems: Operating and monitoring manufacturing or production equipment
  • Healthcare Clinical Systems: Supporting medical diagnoses and treatment decisions

These systems often have complex requirements, integrate with numerous other systems, and need to be highly optimized for performance and reliability.

Systems with Unique Technical Requirements

Some applications have specialized technical needs that most low-code platforms struggle to address:

  • High-Performance Computing Applications: Systems requiring intensive computational capability
  • Real-Time Processing Systems: Applications with strict latency requirements
  • Advanced Algorithmic Solutions: Systems implementing complex proprietary algorithms
  • Specialized Hardware Integration: Applications interfacing directly with unique hardware

The technical constraints of low-code platforms can become limiting when working with these specialized requirements.

Highly Customized User Experiences

Applications where the user experience is a key differentiator often benefit from the design freedom of custom development:

  • Consumer-Facing Mobile Apps: Applications competing for user engagement in the app stores
  • Interactive Data Visualization Tools: Systems presenting complex data in uniquely intuitive ways
  • Immersive Customer Experiences: Applications creating distinctive branded experiences

While low-code platforms have improved their UI capabilities, they still impose more constraints than custom development.

Systems Requiring Deep Integration

Applications that need to integrate deeply with multiple complex systems often benefit from custom development:

  • Enterprise-Wide Data Hubs: Centralizing and reconciling data from numerous source systems
  • Cross-Platform Synchronization Systems: Maintaining consistency across diverse platforms
  • Legacy System Modernization: Creating modern interfaces for older systems while preserving functionality

The complexity of these integration challenges often exceeds the capabilities of low-code integration tools.

Applications with Unpredictable Scaling Requirements

Systems that may need to scale dramatically and unpredictably often require the optimization possible with custom development:

  • Viral Consumer Applications: Systems that could experience explosive growth
  • High-Volume Transaction Systems: Applications processing large numbers of concurrent operations
  • Data-Intensive Analytics Platforms: Systems working with large datasets requiring optimized processing

While many low-code platforms offer cloud scalability, they may not provide the performance optimization options available in custom development.

Finding the Right Balance: A Hybrid Approach

Most organizations will benefit from a balanced approach that leverages both low-code and custom development where each is most appropriate. Here's a framework for determining the right approach for a specific application:

Assessment Criteria

Evaluate each application against these criteria:

  1. Strategic Importance: How central is this application to your competitive advantage?
  2. Technical Complexity: How complex are the system's requirements and integrations?
  3. User Experience Requirements: How important is a highly customized user experience?
  4. Time Sensitivity: How quickly does the application need to be deployed?
  5. Available Resources: What development resources are available for this project?
  6. Long-term Maintainability: Who will maintain the application over time?
  7. Scalability Requirements: How might usage and data volume grow over time?
  8. Budget Constraints: What financial resources are available for development and maintenance?

Decision Matrix

Based on your assessment, you can use this simplified decision matrix as a starting point:

| Factor | Favors Low-Code | Favors Custom Development |
| --- | --- | --- |
| Strategic Value | Operational/Supporting | Core/Differentiating |
| User Base | Internal | Customer-Facing |
| Complexity | Low to Moderate | High to Very High |
| Timeline | Urgent | Flexible |
| Maintenance | Business Users/Analysts | IT/Development Team |
| Integration | Moderate (Standard APIs) | Deep/Complex |
| Customization | Moderate | Extensive |
| Scalability | Predictable | Unpredictable/Massive |

Hybrid Scenarios

Many successful implementations combine elements of both approaches:

  • Low-Code Frontend with Custom Backend: Using low-code to create user interfaces that connect to custom-developed APIs and services
  • Low-Code for Rapid Deployment, Custom for Scale: Starting with low-code to quickly launch an MVP, then selectively rebuilding high-load components with custom code
  • Custom Core with Low-Code Extensions: Building core functionality with custom development while enabling business users to extend the platform with low-code tools
  • Low-Code Business Logic with Custom Integration Layer: Using low-code for business rules and workflows while implementing custom integration components

Real-World Success Story: Financial Services Firm

A mid-size financial services firm needed to modernize their client onboarding process, which was largely manual and paper-based. After assessing their requirements, they adopted a hybrid approach:

The Challenge

  • Lengthy onboarding process (average 3 weeks)
  • High error rates in paperwork
  • Poor visibility into process status
  • Compliance risks from inconsistent documentation

The Solution

The firm implemented a hybrid approach:

  • Low-Code Portal (Microsoft Power Apps): Client-facing portal for information collection and status tracking
  • Custom Integration Layer: Purpose-built middleware connecting the portal to core banking systems
  • Low-Code Workflow (Power Automate): Internal approval and review processes
  • Custom Document Processing: Specialized OCR and document verification for high-volume processing

The Results

  • Reduced onboarding time from 3 weeks to 5 days
  • Decreased error rates by 78%
  • Improved client satisfaction scores by 45%
  • Enhanced compliance through consistent documentation
  • Achieved positive ROI within 9 months

The hybrid approach allowed them to:

  • Move Quickly: The low-code components were deployed within 8 weeks
  • Address Complexity: Custom components handled the most technically challenging aspects
  • Enable Business Ownership: Business teams could directly maintain and update workflow rules
  • Optimize Costs: Development resources focused on the highest-complexity components

Mitigating Risks in Low-Code Implementation

To maximize success with low-code platforms, consider these risk mitigation strategies:

Governance and Standards

Establish clear governance for low-code development:

  • Defined approval processes for new applications
  • Standards for security and data handling
  • Guidelines for integration with enterprise systems
  • Documentation requirements

Center of Excellence

Create a dedicated team to support low-code initiatives:

  • Provide platform expertise and best practices
  • Review applications for quality and compliance
  • Manage reusable components and templates
  • Coordinate training and knowledge sharing

Developer Collaboration

Foster collaboration between professional developers and citizen developers:

  • Professional review of citizen-developed applications
  • Pairing for complex problems
  • Creation of reusable components by professional developers
  • Clear escalation paths for technical challenges

Platform Evaluation

Regularly reassess your platform choices:

  • Monitor platform roadmaps and updates
  • Evaluate performance against changing requirements
  • Assess total cost of ownership regularly
  • Consider multi-platform strategies for different use cases

Conclusion: Strategic Application of Low-Code Development

Low-code platforms have earned their place in the modern enterprise technology stack. They enable faster development, broader participation in the development process, and more responsive iteration based on user feedback. However, they are not a universal replacement for custom development.

The most successful organizations take a strategic approach to low-code adoption:

  1. Use Low-Code Where It Excels: Operational applications, departmental tools, and rapid prototypes
  2. Use Custom Development for Core Systems: Mission-critical applications, highly specialized systems, and key competitive differentiators
  3. Adopt Hybrid Approaches Where Appropriate: Combining the speed of low-code with the flexibility of custom development
  4. Establish Strong Governance: Ensuring low-code development aligns with enterprise standards
  5. Continuously Evaluate and Adjust: Regularly assessing the effectiveness of your approach as technology and business needs evolve

At Leverture, we help clients navigate these decisions, implementing both low-code and custom solutions based on their specific business contexts. Our experience across platforms allows us to recommend the right approach for each unique situation.

Whether you're just beginning to explore low-code development or seeking to optimize an existing strategy, a thoughtful assessment of where these platforms truly add value will help you maximize your return on investment while avoiding potential pitfalls.

Ready to discuss the right development approach for your business needs? Contact Leverture today for a consultation with our experienced solution architects.

A comprehensive guide for mid-market companies ready to implement AI solutions that deliver real business value, featuring proven use cases, practical strategies, and a detailed implementation case study.

Artificial Intelligence is no longer the exclusive domain of tech giants and Fortune 500 companies. Mid-market organizations are increasingly recognizing AI's potential to drive operational efficiency, enhance customer experiences, and create competitive advantages. However, the journey from AI curiosity to meaningful implementation requires strategic thinking, realistic expectations, and a clear understanding of what works in practice.

At Leverture, we've guided numerous mid-market companies through successful AI implementations, helping them navigate the complex landscape of AI technologies while delivering measurable business value. This guide distills our experience into actionable insights for organizations ready to harness AI's transformative potential.

The Mid-Market AI Opportunity

Mid-market companies occupy a unique position in the AI landscape. Unlike large enterprises, they often lack extensive data science teams and unlimited budgets. However, they possess advantages that can accelerate AI adoption: nimble decision-making, focused use cases, and the ability to implement solutions quickly across their entire organization.

The key is approaching AI implementation strategically, focusing on proven technologies that address specific business challenges rather than pursuing AI for its own sake. Successful mid-market AI initiatives share common characteristics: they solve real problems, deliver measurable ROI, and integrate seamlessly with existing workflows.

Realistic AI Use Cases That Deliver ROI

The most successful AI implementations in mid-market companies focus on practical applications that enhance existing operations rather than completely reimagining business processes. Based on our experience, here are proven use cases that consistently deliver return on investment:

Customer Service Enhancement

AI-powered customer service improvements often provide the quickest path to measurable ROI. These solutions can handle routine inquiries, escalate complex issues to human agents, and provide 24/7 support capabilities.

Practical Applications:

  • Intelligent Chatbots: Handle common customer questions, process simple requests, and gather initial information before human handoff
  • Sentiment Analysis: Monitor customer communications to identify satisfaction trends and flag potential issues
  • Automated Ticket Routing: Direct customer inquiries to the most appropriate team member based on content analysis

Expected ROI Timeline: 3-6 months, with typical cost savings of 20-30% in customer service operations while improving response times.
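
To make the sentiment-analysis use case above concrete, here is a minimal sketch using the open-source Hugging Face transformers library. The sample messages are placeholders, and a production system would batch messages from your support platform and route flagged items to an agent.

# Minimal sentiment-analysis sketch using the Hugging Face transformers library.
# Assumes `pip install transformers`; the messages below are illustrative placeholders.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default pretrained model

messages = [
    "Thanks for the quick fix, the new release works great.",
    "This is the third time my order has been delayed. Very frustrating.",
]

for text in messages:
    result = classifier(text)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        print(f"Flag for follow-up: {text!r}")
    else:
        print(f"OK ({result['label']}): {text!r}")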

Predictive Maintenance and Operations

For companies with physical assets or equipment, AI-driven predictive maintenance can significantly reduce downtime and maintenance costs while extending equipment life.

Practical Applications:

  • Equipment Monitoring: Analyze sensor data to predict when maintenance is needed
  • Inventory Optimization: Predict demand patterns to optimize stock levels and reduce carrying costs
  • Quality Control: Use computer vision to identify defects or anomalies in products or processes

Expected ROI Timeline: 6-12 months, with maintenance cost reductions of 15-25% and significant decreases in unplanned downtime.

Sales and Marketing Intelligence

AI can enhance sales and marketing effectiveness by providing insights into customer behavior, optimizing pricing strategies, and improving lead qualification processes.

Practical Applications:

  • Lead Scoring: Automatically rank prospects based on likelihood to convert
  • Dynamic Pricing: Optimize pricing based on market conditions, inventory levels, and customer segments
  • Customer Segmentation: Identify distinct customer groups for targeted marketing campaigns
  • Churn Prediction: Identify customers at risk of leaving to enable proactive retention efforts

Expected ROI Timeline: 4-8 months, with typical improvements of 10-20% in sales conversion rates and 15-30% increases in marketing campaign effectiveness.
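
To illustrate the lead-scoring idea above, the sketch below trains a simple scikit-learn model on historical leads. The feature names and data are hypothetical; a real project would start from CRM exports and validate the model on held-out data before using scores in the sales process.

# Hypothetical lead-scoring sketch with scikit-learn (feature names are illustrative).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Historical leads: engagement features plus whether the lead eventually converted.
leads = pd.DataFrame({
    "email_opens":  [0, 5, 2, 12, 1, 8, 3, 15],
    "site_visits":  [1, 4, 2, 10, 0, 6, 2, 12],
    "company_size": [10, 200, 50, 500, 5, 300, 40, 1000],
    "converted":    [0, 1, 0, 1, 0, 1, 0, 1],
})

X = leads.drop(columns="converted")
y = leads["converted"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new lead: the predicted probability of conversion becomes the lead score.
new_lead = pd.DataFrame([{"email_opens": 7, "site_visits": 5, "company_size": 250}])
print("Lead score:", round(model.predict_proba(new_lead)[0][1], 2))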

Document and Data Processing

Many mid-market companies still handle significant amounts of paperwork and manual data entry. AI can automate these processes, reducing errors and freeing staff for higher-value activities.

Practical Applications:

  • Intelligent Document Processing: Extract and categorize information from invoices, contracts, and forms
  • Data Entry Automation: Automatically populate systems from scanned documents or emails
  • Compliance Monitoring: Automatically review documents for regulatory compliance issues

Expected ROI Timeline: 2-4 months, with processing time reductions of 50-80% and error rate decreases of 60-90%.
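
As a sketch of the document-processing use case, the snippet below sends a scanned invoice to AWS Textract and pulls out its text lines. The file name is a placeholder, AWS credentials are assumed to be configured, and downstream steps (field mapping, validation, posting to your ERP) are omitted.

# Sketch: extract text lines from a scanned invoice with AWS Textract (boto3).
import boto3

textract = boto3.client("textract")

with open("invoice.png", "rb") as f:  # placeholder file name
    response = textract.detect_document_text(Document={"Bytes": f.read()})

lines = [block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"]

# Naive example of downstream use: find the line that mentions the invoice total.
total_lines = [line for line in lines if "total" in line.lower()]
print(total_lines)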

Financial Analysis and Forecasting

AI can enhance financial planning and analysis by identifying patterns in financial data, improving forecasting accuracy, and automating routine financial processes.

Practical Applications:

  • Cash Flow Forecasting: Predict future cash positions based on historical patterns and current trends
  • Expense Analysis: Identify unusual spending patterns and potential cost-saving opportunities
  • Risk Assessment: Evaluate credit risk for customers or investment opportunities

Expected ROI Timeline: 6-9 months, with improvements in forecasting accuracy of 15-25% and reductions in manual analysis time of 40-60%.

Starting Small with Proven Technologies

One of the most critical success factors for mid-market AI implementation is starting with proven, accessible technologies rather than attempting to build cutting-edge solutions from scratch. This approach reduces risk, accelerates time-to-value, and builds organizational confidence in AI capabilities.

Cloud-Based AI Services

Major cloud providers offer pre-built AI services that can be integrated into existing applications without requiring extensive machine learning expertise. These services are cost-effective, scalable, and backed by the extensive research and development of technology giants.

Recommended Starting Points:

  • Natural Language Processing: Microsoft Azure Cognitive Services, Google Cloud Natural Language API, AWS Comprehend
  • Computer Vision: Azure Computer Vision, Google Cloud Vision API, AWS Rekognition
  • Speech Services: Azure Speech Services, Google Speech-to-Text, AWS Transcribe
  • Translation Services: Azure Translator, Google Translate API, AWS Translate
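
As an example of calling one of these pre-built services, the sketch below sends an image to AWS Rekognition for label detection. The same pattern (create a client, pass bytes, read back structured results) applies to the other providers listed above; the image file name is a placeholder and credentials are assumed to be configured.

# Sketch: image labeling with AWS Rekognition (boto3).
import boto3

rekognition = boto3.client("rekognition")

with open("product_photo.jpg", "rb") as f:  # placeholder file name
    response = rekognition.detect_labels(
        Image={"Bytes": f.read()},
        MaxLabels=10,
        MinConfidence=80,
    )

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')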

Industry-Specific AI Platforms

Many software vendors now offer AI-enhanced versions of industry-specific applications, providing a lower-risk path to AI adoption within familiar software environments.

Examples by Industry:

  • Manufacturing: Predictive maintenance modules in ERP systems
  • Healthcare: AI-enhanced practice management and patient engagement platforms
  • Financial Services: Fraud detection and risk assessment tools
  • Retail: Inventory optimization and customer analytics platforms

No-Code and Low-Code AI Tools

The emergence of no-code and low-code AI platforms enables business users to create AI-powered solutions without extensive programming knowledge, democratizing AI development within organizations.

Popular Platforms:

  • Microsoft Power Platform: AI Builder for creating custom AI models
  • Google AutoML: User-friendly machine learning model creation
  • IBM Watson Studio: Visual model building and deployment
  • H2O.ai: Automated machine learning platform

Pilot Project Strategy

Successful AI implementation typically follows a structured pilot approach that validates both technical feasibility and business value before scaling solutions organization-wide.

Pilot Project Criteria:

  1. Clear Success Metrics: Define specific, measurable outcomes that indicate project success
  2. Limited Scope: Focus on a single process or department to minimize complexity
  3. Available Data: Ensure sufficient, quality data exists to train and validate AI models
  4. Stakeholder Buy-in: Secure support from both technical teams and business users
  5. Timeline Constraints: Aim for pilots that can demonstrate value within 90 days

Build vs. Buy Decisions for AI Capabilities

One of the most critical decisions mid-market companies face is whether to build custom AI solutions or purchase existing products. This decision significantly impacts implementation timelines, costs, and long-term success.

When to Buy AI Solutions

Purchasing existing AI solutions is the right choice for mid-market companies in most scenarios. Commercial AI products offer proven functionality, ongoing support, and faster implementation timelines.

Buy When:

  • Standard Use Cases: The AI application addresses common business needs that many companies share
  • Limited AI Expertise: Your organization lacks deep machine learning knowledge and resources
  • Time Sensitivity: You need to implement AI capabilities quickly to remain competitive
  • Proven Solutions Exist: Commercial products already address your specific industry or functional needs
  • Ongoing Support Requirements: You prefer vendor-managed updates, maintenance, and technical support

Recommended Purchase Scenarios:

  • Customer service chatbots and virtual assistants
  • Document processing and data extraction tools
  • Marketing automation and customer analytics platforms
  • Cybersecurity threat detection systems
  • Financial fraud detection and risk management tools

When to Build Custom AI Solutions

Custom AI development makes sense when your organization has unique requirements that cannot be met by existing solutions, or when AI capabilities are central to your competitive advantage.

Build When:

  • Unique Business Processes: Your workflows or requirements are significantly different from industry standards
  • Proprietary Data Advantage: You possess unique datasets that could create competitive advantages
  • Integration Complexity: Existing solutions cannot integrate effectively with your current systems
  • Strategic Differentiation: AI capabilities are central to your business model and competitive positioning
  • Technical Resources Available: You have or can access the necessary AI development expertise

Recommended Build Scenarios:

  • Highly specialized industrial process optimization
  • Custom recommendation engines for unique product catalogs
  • Proprietary risk assessment models for specialized industries
  • AI-powered features that differentiate your product offerings

Hybrid Approaches

Many successful AI implementations combine purchased solutions with custom development, leveraging the strengths of both approaches.

Effective Hybrid Strategies:

  • Foundation Plus Customization: Start with commercial platforms and add custom components for unique requirements
  • API Integration: Use commercial AI services as components within custom applications
  • Phased Approach: Begin with purchased solutions and gradually replace with custom development as expertise grows
  • Vendor Partnerships: Collaborate with AI vendors to customize their solutions for your specific needs

Evaluation Framework

To make informed build vs. buy decisions, evaluate potential solutions across multiple dimensions:

| Factor | Buy Indicator | Build Indicator |
| --- | --- | --- |
| Functionality Match | 80%+ requirement coverage | Unique requirements |
| Timeline | Immediate need | Flexible timeline |
| Budget | Limited development budget | Significant development resources |
| Expertise | Limited AI knowledge | Strong technical team |
| Strategic Value | Supporting function | Core differentiator |
| Data Sensitivity | Standard security needs | Highly sensitive data |
| Integration | Standard APIs available | Complex integration needs |
| Long-term Control | Vendor dependency acceptable | Full control required |

AI Implementation Scenario: Mid-Market Manufacturing Company

To illustrate how AI principles can be applied in practice, let's explore a potential implementation scenario for a mid-sized manufacturing company looking to transform their operations and achieve significant business value.

Company Profile

A mid-market precision manufacturing company with:

  • 150 employees across two facilities
  • Annual revenue of $45 million
  • Traditional manufacturing processes with minimal automation
  • Operational challenges in quality control, maintenance scheduling, and inventory management

Current Operational Challenges

The company faces several operational challenges that impact profitability and customer satisfaction:

Reactive Maintenance: Equipment failures result in costly unplanned downtime that could be prevented with better predictive capabilities.

Inconsistent Quality: Manual quality control processes occasionally allow defects to reach customers, affecting reputation and requiring costly rework.

Inventory Inefficiencies: Excess inventory ties up working capital while stockouts delay production and disappoint customers.

Manual Data Collection: Paper-based processes limit visibility into operations and prevent data-driven decision making.

Proposed AI Implementation Strategy

Rather than attempting a comprehensive digital transformation, the company could adopt a phased approach focusing on high-impact, low-risk AI applications.

Phase 1: Predictive Maintenance (Months 1-4)

The first phase would focus on implementing predictive maintenance using existing sensor data and a commercial IoT platform with built-in AI capabilities.

Proposed Solution Components:

  • Industrial IoT sensors on critical equipment
  • Microsoft Azure IoT platform with AI-powered analytics
  • Custom dashboard for maintenance team
  • Integration with existing maintenance management system

Implementation Approach:

  • Pilot Equipment Selection: Start with three critical machines representing different equipment types
  • Data Collection: Install vibration, temperature, and current sensors
  • Model Training: Use Azure Machine Learning to develop predictive models
  • Integration: Connect predictions to existing work order system
  • Training: Educate maintenance team on interpreting and acting on AI insights
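
To illustrate the model-training step, here is a minimal local sketch using scikit-learn's IsolationForest as a stand-in for the Azure Machine Learning models described above. The sensor readings are simulated; a real implementation would train on historical data from the installed sensors and run inside the cloud pipeline.

# Minimal anomaly-detection sketch (stand-in for the Azure ML step); data is simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated healthy baseline: vibration (mm/s), temperature (°C), current (A).
normal = rng.normal(loc=[2.0, 60.0, 15.0], scale=[0.3, 2.0, 1.0], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# New readings arriving from the shop floor; the last one drifts out of range.
readings = np.array([
    [2.1, 61.0, 15.2],
    [1.9, 59.5, 14.8],
    [4.5, 78.0, 22.0],
])

for reading, flag in zip(readings, model.predict(readings)):
    status = "ANOMALY - schedule inspection" if flag == -1 else "normal"
    print(reading, status)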

Potential Benefits:

  • Significant reduction in unplanned downtime
  • Decreased maintenance costs through optimized scheduling
  • Rapid ROI within the first year
  • Improved team buy-in through demonstrated value

Phase 2: Computer Vision Quality Control (Months 5-8)

Building on predictive maintenance success, the company could implement AI-powered visual inspection to enhance quality control processes.

Proposed Solution Components:

  • High-resolution cameras at key inspection points
  • Custom computer vision model trained on defect examples
  • Real-time alerts for quality issues
  • Integration with quality management system

Implementation Approach:

  • Image Collection: Gather thousands of images showing acceptable and defective products
  • Model Development: Partner with AI specialists to develop custom computer vision models
  • Production Integration: Install cameras and processing equipment at inspection stations
  • Validation: Run parallel operations with manual inspection to validate accuracy
  • Deployment: Gradual transition to AI-primary inspection with human oversight

Potential Benefits:

  • Improved accuracy in defect detection compared to manual inspection
  • Substantial reduction in customer quality complaints
  • Increased inspection throughput
  • Enhanced customer satisfaction scores

Phase 3: Demand Forecasting and Inventory Optimization (Months 9-12)

The final phase would address inventory management through AI-powered demand forecasting.

Proposed Solution Components:

  • Integration with ERP and CRM systems
  • Machine learning models for demand prediction
  • Automated inventory optimization recommendations
  • Exception reporting for unusual demand patterns

Implementation Approach:

  • Data Integration: Connect sales, inventory, and customer data sources
  • Model Development: Build forecasting models incorporating seasonal patterns and customer behavior
  • Testing: Validate forecasts against historical data and actual results
  • Automation: Integrate recommendations into purchasing workflows
  • Monitoring: Establish KPIs to track forecasting accuracy and inventory performance
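
As a simplified sketch of the forecasting step, the snippet below produces a seasonal-naive forecast from monthly sales history with pandas. The sales figures are illustrative; real models would incorporate promotions, customer behavior, and the accuracy tracking described above.

# Simplified seasonal-naive demand forecast with pandas; the sales history is illustrative.
import pandas as pd

# Two years of monthly unit sales for one SKU.
history = pd.Series(
    [120, 110, 150, 170, 160, 200, 210, 190, 180, 170, 220, 260] * 2,
    index=pd.date_range("2023-01-01", periods=24, freq="MS"),
)

# Seasonal-naive forecast: each future month repeats the same month from last year.
future_index = pd.date_range(history.index[-1] + pd.offsets.MonthBegin(1),
                             periods=12, freq="MS")
forecast = pd.Series(history.iloc[-12:].to_numpy(), index=future_index)

# Simple reorder suggestion: cover next month's forecast plus 15% safety stock.
next_month = forecast.iloc[0]
print("Forecast for next month:", int(next_month))
print("Suggested order quantity:", int(next_month * 1.15))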

Potential Benefits:

  • Significant reduction in excess inventory
  • Dramatic decrease in stockout incidents
  • Improved cash flow through optimized inventory levels
  • Enhanced customer delivery performance

Critical Success Factors

Several factors would be essential for successful AI implementation:

Executive Sponsorship: Strong leadership support ensures adequate resources and organizational commitment throughout the implementation process.

Phased Approach: Starting small allows the organization to build expertise and confidence gradually while minimizing risk.

Change Management: Extensive training and communication help employees embrace AI tools rather than fear job displacement.

Strategic Partnership: Working with AI specialists provides access to expertise without the need to build an internal team immediately.

Focus on ROI: Each phase should deliver measurable business value to justify continued investment and maintain organizational support.

Integration with Existing Systems: AI solutions should enhance rather than replace existing workflows to minimize disruption and maximize adoption.

Implementation Considerations

For mid-market companies considering similar AI implementations:

  • Data Quality: Significant effort may be required to clean and prepare historical data for AI models
  • User Adoption: Success depends as much on change management as technical implementation
  • High-Impact Applications: Starting with predictive maintenance or quality control can provide quick wins that build momentum
  • External Expertise: Partnerships can significantly reduce implementation time and risk
  • Continuous Improvement: AI models require ongoing refinement and monitoring to maintain effectiveness

This scenario demonstrates how a structured, phased approach to AI implementation can help mid-market manufacturers overcome operational challenges while building organizational capability and confidence in AI technologies.

Conclusion: Your AI Journey Starts Now

Artificial Intelligence is no longer a future possibility for mid-market companies—it's a present opportunity that can drive significant competitive advantages. The key to success lies not in pursuing the most advanced AI technologies, but in strategically implementing proven solutions that address real business challenges and deliver measurable value.

At Leverture, we've seen firsthand how mid-market companies can successfully harness AI to transform their operations, enhance customer experiences, and achieve sustainable growth. The most successful implementations follow a structured approach: starting with realistic use cases, leveraging proven technologies, making informed build vs. buy decisions, and maintaining a relentless focus on business outcomes.

Your AI journey doesn't require a massive upfront investment or a complete organizational overhaul. Instead, it begins with identifying a specific business challenge that AI can address, selecting appropriate technologies, and implementing a solution that integrates seamlessly with your existing operations. Success breeds success, and early wins will build the foundation for more ambitious AI initiatives.

The competitive landscape is evolving rapidly, and companies that successfully implement AI will gain significant advantages over those that delay. However, rushing into AI without proper planning and strategy can lead to costly failures and organizational skepticism about AI's potential.

Whether you're just beginning to explore AI possibilities or ready to implement your first AI solution, partnering with experienced professionals can significantly increase your likelihood of success while reducing implementation risks and timeframes.

Ready to begin your AI transformation journey? Contact Leverture today to discuss how we can help you identify high-impact AI opportunities and develop a strategic implementation roadmap tailored to your organization's unique needs and objectives.

Navigate the critical decision between monolithic and microservices architectures with a comprehensive framework that balances organizational readiness, application complexity, and business priorities.

As businesses continually seek to evolve their digital capabilities, one of the most significant architectural decisions they face is whether to build applications as monoliths or microservices. This decision impacts everything from development speed and team organization to operational costs and future flexibility.

At Leverture, we've guided numerous organizations through this critical architectural crossroads. In this follow-up to our article on implementing impersonation with AUTH0, we'll explore how to make this decision strategically rather than following industry trends blindly.

Understanding the Architectural Paradigms

Before diving into the decision-making framework, let's clarify what we mean by monolithic and microservices architectures.

The Monolithic Architecture

A monolithic application is built as a single, unified unit. Typically, a monolith consists of:

  • A client-side user interface
  • A server-side application
  • A database

All functions of the application—from handling HTTP requests and executing business logic to database operations and communicating with external systems—exist within a single codebase and runtime process.

The Microservices Architecture

In contrast, a microservices architecture breaks an application into smaller, independent services that:

  • Focus on specific business capabilities
  • Run in their own processes
  • Communicate through well-defined APIs
  • Can be deployed independently
  • Often have their own dedicated databases or data storage

As we highlighted in our AUTH0 impersonation article, one of the significant challenges we faced was reimplementing user impersonation when transitioning from a monolithic legacy application to a microservices architecture. This example illustrates just one of many considerations when migrating between architectural paradigms.

Comparative Analysis: Advantages and Disadvantages

Monolithic Architecture

Advantages:

  1. Simplicity in Development: With everything in one codebase, development tools and workflows are straightforward.
  2. Easier Testing: End-to-end testing is simpler when all components are in one application.
  3. Simplified Deployment: Only one application needs to be deployed, with fewer operational concerns.
  4. Lower Initial Complexity: Network latency, message failures, and versioning issues are less prevalent concerns.
  5. Shared Memory Access: Components can interact directly, avoiding network overhead.
  6. Startup Efficiency: Generally faster to initialize than multiple interconnected services.

Disadvantages:

  1. Scale Limitations: The entire application must scale together, even if only one component requires additional resources.
  2. Technology Lock-in: Changing frameworks or languages requires rewriting the entire application.
  3. Complexity Growth: As the application grows, the codebase becomes harder to understand and modify.
  4. Continuous Deployment Challenges: Even small changes require deploying the entire application.
  5. Reliability Concerns: A bug in any component can potentially crash the entire system.
  6. Team Coordination Overhead: Multiple teams working on the same codebase require significant coordination.

Microservices Architecture

Advantages:

  1. Independent Scaling: Each service can be scaled according to its specific resource needs.
  2. Technology Diversity: Teams can select the best tools for each service's requirements.
  3. Resilience: Failures in one service are less likely to bring down the entire system.
  4. Deployment Flexibility: Services can be updated independently, enabling more frequent releases.
  5. Team Autonomy: Smaller teams can own specific services, reducing coordination overhead.
  6. Improved Fault Isolation: Issues can be isolated to specific services, limiting impact.
  7. Better Alignment with Business Capabilities: Services often map more directly to business domains.

Disadvantages:

  1. Distributed System Complexity: Managing service discovery, network communication, and partial failures.
  2. Data Consistency Challenges: Maintaining consistency across service boundaries requires careful design.
  3. Testing Complexity: Integration and end-to-end testing is more difficult across service boundaries.
  4. Operational Overhead: Managing multiple services requires sophisticated deployment pipelines and monitoring.
  5. Network Latency: Inter-service communication adds latency compared to in-process calls.
  6. Transaction Management: Distributed transactions are notoriously difficult to implement correctly.
  7. Development Environment Complexity: Local development may require running multiple services simultaneously.

A Strategic Decision Framework

Rather than following industry trends, your architecture choice should be guided by your specific business context. Here's a framework to help navigate this decision:

1. Assess Your Organizational Readiness

Consider your team's capabilities and organizational structure:

  • Team Size and Structure: Microservices thrive with multiple small teams working autonomously. Small organizations may struggle to staff multiple specialized teams.
  • DevOps Maturity: Microservices require sophisticated deployment, monitoring, and infrastructure automation capabilities.
  • Existing Expertise: Transitioning to microservices often requires new skills in distributed systems design, API development, and container orchestration.

Key Question: Does your organization have the size, structure, and technical expertise to effectively manage a distributed system?

2. Evaluate Application Complexity and Scale

Not all applications benefit equally from microservices:

  • Application Size: Smaller applications with limited functionality may not justify the overhead of microservices.
  • Scaling Requirements: Applications with varying load profiles across different functions benefit more from microservices' independent scaling.
  • Growth Trajectory: Applications expected to grow significantly in complexity and scale over time may warrant microservices' flexibility.

Key Question: Does your application have the scale, complexity, and growth trajectory to justify the additional overhead of microservices?

3. Consider Business Requirements

Business priorities should heavily influence your architecture choice:

  • Time to Market: Monoliths often enable faster initial delivery for new applications.
  • Competitive Differentiation: Parts of your application that provide unique value might benefit from the flexibility of microservices.
  • Change Frequency: Components requiring frequent updates are good candidates for microservices.
  • Availability Requirements: Critical systems requiring high availability may benefit from the isolation provided by microservices.

Key Question: What business priorities (speed, flexibility, reliability) are most important for your application's success?

4. Analyze Technical Requirements

Some technical considerations naturally point toward one architecture or the other:

  • Performance Sensitivity: Applications where microseconds matter may struggle with the network overhead of microservices.
  • Data Consistency Requirements: Systems requiring strong transactional consistency are often easier to implement as monoliths.
  • Independent Scalability Needs: Components with vastly different resource requirements benefit from microservices' granular scaling.
  • Polyglot Requirements: The need to use different programming languages or frameworks for different components points toward microservices.

Key Question: Do your technical requirements include any factors that strongly favor one architecture over the other?

The Hybrid Approach: A Pragmatic Middle Ground

Many successful organizations adopt a hybrid approach that combines elements of both architectural styles:

  • Modular Monoliths: Well-structured monoliths with clear internal boundaries that could eventually evolve into microservices.
  • Domain-Driven Decomposition: Breaking out specific bounded contexts as microservices while keeping others together.
  • Strangler Fig Pattern: Gradually migrating functionality from a monolith to microservices over time.

This approach allows organizations to capture some microservices benefits while managing complexity growth.
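
A common way to implement the strangler fig pattern is a thin routing layer in front of the monolith: traffic for capabilities that have already been extracted goes to the new services, and everything else continues to the legacy application. The sketch below shows the idea with Flask and requests; the hostnames and path prefixes are placeholders, and a real gateway would also handle retries, timeouts, and authentication.

# Strangler-fig routing sketch (Flask + requests); hostnames and prefixes are placeholders.
import requests
from flask import Flask, Response, request

app = Flask(__name__)

LEGACY_MONOLITH = "http://legacy.internal:8080"
EXTRACTED_SERVICES = {
    "/orders": "http://orders-service.internal:8000",    # already migrated
    "/invoices": "http://billing-service.internal:8000", # already migrated
}

@app.route("/<path:path>", methods=["GET", "POST", "PUT", "PATCH", "DELETE"])
def route(path):
    # Default to the monolith; redirect only paths that have been extracted.
    target = LEGACY_MONOLITH
    for prefix, service in EXTRACTED_SERVICES.items():
        if ("/" + path).startswith(prefix):
            target = service
            break

    upstream = requests.request(
        method=request.method,
        url=f"{target}/{path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        params=request.args,
    )
    return Response(upstream.content, status=upstream.status_code)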

Case Study: Financial Services Platform Migration

To illustrate these principles in action, let's examine how a mid-sized financial services company successfully navigated their transition from a monolith to microservices.

Company Background

  • 15-year-old wealth management platform
  • 500,000+ users
  • Monolithic application with 1.5 million lines of code
  • Growing maintenance challenges and difficulty adding new features

The Challenge

The company's monolithic platform was becoming increasingly difficult to maintain and enhance. New feature development had slowed dramatically, and the company was struggling to respond to market changes quickly. Additionally, the application experienced performance issues during peak usage periods, affecting customer satisfaction.

As detailed in our AUTH0 impersonation article, the company also needed to ensure that critical functionality—like customer service representatives being able to impersonate users to troubleshoot issues—would continue to work in the new architecture.

The Assessment Process

The company applied a systematic evaluation process:

  1. Organizational Assessment:
    • Multiple development teams already in place
    • Strong engineering leadership
    • Emerging DevOps capabilities, but requiring investment
  2. Application Assessment:
    • Large, complex application with multiple distinct domains
    • Uneven scaling needs across different functions
    • Clear boundaries between some business capabilities
  3. Business Requirements:
    • Need for faster feature delivery
    • Desire to move toward continuous deployment
    • Requirements for improved reliability
  4. Technical Considerations:
    • Different components had different technology requirements
    • Database scaling becoming problematic
    • Some functions required real-time performance, others didn't

The Migration Strategy

Based on their assessment, the company adopted a hybrid approach with these key elements:

  1. Domain-Driven Decomposition: They identified bounded contexts within their application, defining clear service boundaries aligned with business capabilities.
  2. Strangler Fig Implementation: Rather than a "big bang" rewrite, they gradually migrated functionality from the monolith to microservices.
  3. Data Migration Strategy: They employed a combination of database-per-service for new services while maintaining careful synchronization with legacy data.
  4. Authentication and Authorization: As detailed in our AUTH0 article, they implemented a centralized identity service using AUTH0, including custom solutions for features like user impersonation.
  5. API Gateway Pattern: They introduced an API gateway to route requests, handle cross-cutting concerns, and provide a unified entry point.

Implementation Challenges and Solutions

The migration wasn't without challenges:

  1. User Impersonation: As detailed in our previous article, they had to implement custom solutions to replace functionality that wasn't available in their new authentication system.
  2. Distributed Transactions: For workflows spanning multiple services, they implemented a saga pattern with compensating transactions.
  3. Team Restructuring: They reorganized teams around business capabilities rather than technical layers, requiring cultural and organizational changes.
  4. Operational Complexity: They invested heavily in monitoring, tracing, and centralized logging to maintain visibility across services.

Results

After 18 months of phased implementation:

  • Deployment frequency increased from monthly to daily releases
  • Development velocity improved by 40%
  • System stability improved, with 99.99% uptime (up from 99.9%)
  • Peak load handling improved without proportional cost increases
  • Teams reported higher autonomy and ownership

Most importantly, the business gained the ability to respond to market changes more quickly, launching several competitive features that would have been challenging under the previous architecture.

Making Your Decision: A Balanced Approach

As this case study illustrates, the monolith vs. microservices decision isn't binary. Consider these final recommendations:

  1. Start with Business Objectives: Let your business needs drive technical decisions, not vice versa.
  2. Be Honest About Readiness: Assess your organization's true capabilities and readiness for distributed systems.
  3. Consider Incremental Approaches: A well-designed monolith can evolve toward microservices over time.
  4. Pilot in Non-Critical Areas: Test microservices approaches in less critical systems before committing critical functionality.
  5. Invest in Infrastructure: Success with microservices requires significant investment in automation, monitoring, and operational tooling.
  6. Plan for Cross-Cutting Concerns: Authentication, logging, monitoring, and other cross-cutting concerns need special attention in a microservices world.

At Leverture, we help clients navigate these complex architectural decisions, providing expertise in both monolithic and microservices implementations. Our experience with challenges like the AUTH0 impersonation issue mentioned earlier gives us practical insight into the real-world complexities of architecture migrations.

Whether you're building a new application or considering modernizing an existing one, a thoughtful approach to architecture selection pays dividends in long-term maintainability, scalability, and business agility.

Ready to discuss the right architecture for your business needs? Contact Leverture today for a consultation with our experienced solution architects.

Discover essential strategies for developing high-performance, secure, and future-ready RESTful APIs that scale seamlessly to meet enterprise demands in 2025 and beyond.

In today's interconnected digital ecosystem, APIs are the backbone that enables seamless integration between applications, services, and platforms. As we navigate through 2025, the demand for high-performance, scalable, and secure APIs continues to grow, and organizations building new systems or evolving existing ones face mounting pressure to develop APIs that not only function correctly today but can scale to meet tomorrow's demands.

At Leverture, we've guided numerous clients through the process of designing, building, and maintaining scalable RESTful APIs. In this article, we'll share the essential best practices that have emerged as critical success factors for modern API development in 2025.

Modern API Design Principles

1. Embrace API-First Development

The API-first approach has moved from being a recommendation to a necessity. By designing your APIs before implementing the underlying systems, you ensure that:

  • Integration capabilities are baked into your architecture from the start
  • Teams can work in parallel once the API contracts are defined
  • Documentation and testing can begin early in the development process

Modern API-first development leverages OpenAPI (formerly Swagger) specifications as living documentation. These specifications serve not just as documentation but as the foundation for automated code generation, mocking, and testing.

2. Resource-Oriented Design

Despite numerous architectural paradigms emerging in recent years, RESTful resource-oriented design remains the foundation of effective API development. When designing your resources:

  • Model your API around business entities rather than operations
  • Use nouns, not verbs, in endpoint paths (e.g., /users not /getUsers)
  • Apply consistent naming conventions across all endpoints
  • Leverage HTTP methods semantically (GET, POST, PUT, PATCH, DELETE)

A well-designed resource model should feel intuitive and aligned with the business domain rather than with your internal implementation details.
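
To show what this looks like in practice, here is a minimal sketch of noun-based, method-driven routes using Flask; the in-memory store and field names are placeholders for whatever persistence layer you actually use.

# Resource-oriented routing sketch with Flask; the in-memory store is a placeholder.
from flask import Flask, jsonify, request

app = Flask(__name__)
users = {"123": {"id": "123", "name": "Jane Smith", "email": "jane@example.com"}}

@app.get("/users")            # collection: list users
def list_users():
    return jsonify(list(users.values()))

@app.post("/users")           # collection: create a user
def create_user():
    payload = request.get_json()
    users[payload["id"]] = payload
    return jsonify(payload), 201

@app.get("/users/<user_id>")  # item: fetch one user
def get_user(user_id):
    user = users.get(user_id)
    return (jsonify(user), 200) if user else (jsonify({"error": "not found"}), 404)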

3. Implement Thoughtful Versioning

API versioning continues to be critical for ensuring backward compatibility while enabling innovation. In 2025, we recommend:

  • Using URI path versioning for most applications (e.g., /v1/users)
  • Providing at least 12 months of support for previous API versions
  • Implementing smart deprecation with detailed developer notifications
  • Using feature flags for granular control over new functionality

The goal is to balance innovation with stability—allowing your API to evolve without disrupting existing integrations.
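
One lightweight way to implement URI path versioning is to mount each version as its own blueprint, which also gives you a natural place to emit deprecation notices for older versions. The sketch below assumes Flask; the sunset date and response fields are hypothetical.

# URI path versioning sketch with Flask blueprints; the sunset date is hypothetical.
from flask import Blueprint, Flask, jsonify

app = Flask(__name__)

v1 = Blueprint("v1", __name__, url_prefix="/v1")
v2 = Blueprint("v2", __name__, url_prefix="/v2")

@v1.get("/users")
def list_users_v1():
    response = jsonify([{"id": "123", "name": "Jane Smith"}])
    # Signal deprecation to clients still on v1.
    response.headers["Deprecation"] = "true"
    response.headers["Sunset"] = "Sat, 31 Jan 2026 00:00:00 GMT"
    return response

@v2.get("/users")
def list_users_v2():
    # v2 adds fields without breaking v1 consumers.
    return jsonify([{"id": "123", "name": "Jane Smith", "email": "jane@example.com"}])

app.register_blueprint(v1)
app.register_blueprint(v2)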

4. Design for Consumption Patterns

Modern APIs must adapt to diverse consumption patterns:

  • Implement query parameters for filtering, sorting, and pagination
  • Support field selection to minimize payload size (e.g., ?fields=name,email)
  • Consider offering both synchronous and asynchronous interaction patterns
  • Design bulk operations for handling multiple resources efficiently

Understanding how developers will use your API in real-world scenarios is crucial for designing endpoints that truly meet their needs.
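
The hedged sketch below shows one way an Express handler might parse pagination and sparse-fieldset parameters; the parameter names (page, pageSize, fields) and the limits are illustrative choices, not a standard.

// Illustrative handling of ?page, ?pageSize, and ?fields query parameters.
import express from "express";

const app = express();

// Toy in-memory data set; a real API would query a database
const users: Record<string, string>[] = [
  { id: "1", name: "Jane Smith", email: "jane@example.com", createdAt: "2025-01-10" },
  { id: "2", name: "Ravi Patel", email: "ravi@example.com", createdAt: "2025-02-03" },
];

app.get("/users", (req, res) => {
  const page = Math.max(1, Number(req.query.page ?? 1));
  const pageSize = Math.min(100, Number(req.query.pageSize ?? 25));
  const fields = typeof req.query.fields === "string" ? req.query.fields.split(",") : null;

  const start = (page - 1) * pageSize;
  let results = users.slice(start, start + pageSize);

  // Sparse fieldsets: ?fields=name,email returns only those attributes (plus id)
  if (fields) {
    results = results.map((u) =>
      Object.fromEntries(Object.entries(u).filter(([k]) => k === "id" || fields.includes(k)))
    );
  }

  res.json({ data: results, meta: { page, pageSize, total: users.length } });
});

app.listen(3000);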

5. Adopt a Consistent Response Structure

Consistency in response structure enhances developer experience:

{
  "data": {
    "id": "123",
    "type": "user",
    "attributes": {
      "name": "Jane Smith",
      "email": "jane@example.com"
    },
    "relationships": {
      "orders": {
        "links": {
          "related": "/users/123/orders"
        }
      }
    }
  },
  "meta": {
    "requestId": "a1b2c3d4",
    "timestamp": "2025-05-15T14:33:22Z"
  }
}

This consistency applies not just to data-bearing responses but also to error responses, which should provide actionable information for troubleshooting.
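
One way to keep that consistency enforceable is to describe the envelope in types shared across handlers. The TypeScript sketch below mirrors the example above and adds an assumed error shape; the field names are illustrative rather than a formal standard.

// Illustrative types for a consistent success and error envelope.
interface SuccessEnvelope<T> {
  data: T;
  meta: { requestId: string; timestamp: string };
}

interface ErrorEnvelope {
  errors: Array<{
    code: string;    // machine-readable, e.g. "VALIDATION_FAILED"
    title: string;   // short human-readable summary
    detail?: string; // actionable explanation for troubleshooting
    field?: string;  // offending input field, when applicable
  }>;
  meta: { requestId: string; timestamp: string };
}

// Example error body a client might receive for a bad request
const example: ErrorEnvelope = {
  errors: [{ code: "VALIDATION_FAILED", title: "Invalid email", field: "email" }],
  meta: { requestId: "a1b2c3d4", timestamp: "2025-05-15T14:33:22Z" },
};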

Security Considerations for Modern APIs

1. Authentication & Authorization Best Practices

Building on our experience implementing Auth0 (detailed in our previous article on implementing impersonation with Auth0), modern API security requires sophisticated authentication and authorization:

  • Implement OAuth 2.1 or newer with proper scopes for fine-grained access control
  • Use stateless JWT tokens with appropriate expiration times
  • Enforce token rotation and implement refresh token mechanisms
  • Consider implementing Dynamic Client Registration for trusted partners

For enterprise applications, identity federation across multiple identity providers has become standard in 2025, requiring thoughtful integration design.
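
As a rough sketch of the token-validation side, the middleware below verifies a short-lived bearer JWT and checks a scope claim, using the jsonwebtoken package with Express; the secret handling, scope format, and error codes are assumptions made for illustration.

// Hedged sketch: JWT validation plus scope checking as Express middleware.
import express from "express";
import jwt from "jsonwebtoken";

const app = express();

// Factory that returns middleware enforcing a bearer JWT carrying a given scope
function requireScope(requiredScope: string) {
  return (req: express.Request, res: express.Response, next: express.NextFunction) => {
    const header = req.headers.authorization ?? "";
    const token = header.startsWith("Bearer ") ? header.slice(7) : null;
    if (!token) {
      return res.status(401).json({ errors: [{ code: "UNAUTHENTICATED", title: "Missing bearer token" }] });
    }
    try {
      // Stateless, short-lived token; verify() also enforces the exp claim
      const claims = jwt.verify(token, process.env.JWT_SECRET as string);
      const scope = typeof claims === "object" ? String(claims.scope ?? "") : "";
      if (!scope.split(" ").includes(requiredScope)) {
        return res.status(403).json({ errors: [{ code: "FORBIDDEN", title: "Insufficient scope" }] });
      }
      next();
    } catch {
      return res.status(401).json({ errors: [{ code: "INVALID_TOKEN", title: "Token is expired or invalid" }] });
    }
  };
}

// Only callers granted the users:read scope may list users
app.get("/users", requireScope("users:read"), (req, res) => res.json({ data: [] }));

app.listen(3000);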

2. Zero Trust Architecture Integration

APIs in 2025 must align with zero trust security principles:

  • Verify every request regardless of source (internal or external)
  • Implement strict context-aware access controls
  • Use mutual TLS (mTLS) for service-to-service communication
  • Monitor and analyze all API access patterns for anomaly detection

As perimeter-based security continues to erode, embedding zero trust principles within your API architecture has become essential.

3. Data Protection & Privacy

With data privacy regulations continuing to evolve globally:

  • Implement field-level encryption for sensitive data
  • Design data residency controls for multi-region deployments
  • Build in robust consent management capabilities
  • Provide transparency mechanisms for data access and processing

The ability to enforce data protection policies programmatically through your API layer has become a key competitive advantage.

4. API Threat Protection

Protection against common API threats requires multiple defense layers:

  • Deploy web application firewalls (WAFs) specifically configured for API protection
  • Implement rate limiting with intelligent throttling policies
  • Use anomaly detection to identify potential attacks
  • Protect against injection attacks, especially in filtering parameters

The OWASP API Security Top 10 should be required reading for all API developers, with protection measures integrated into your CI/CD pipeline.
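
Rate limiting in particular is easy to prototype. The sketch below implements a simple fixed-window, in-memory limiter as Express middleware; the window and threshold are assumed policies, and a production deployment would typically back this with a shared store such as Redis so limits hold across instances.

// Illustrative fixed-window rate limiter keyed by client IP.
import express from "express";

const WINDOW_MS = 60_000;  // 1-minute window (assumed)
const MAX_REQUESTS = 100;  // per client per window (assumed)

const counters = new Map<string, { count: number; windowStart: number }>();

function rateLimit(req: express.Request, res: express.Response, next: express.NextFunction) {
  const key = req.ip ?? "unknown";
  const now = Date.now();
  const entry = counters.get(key);

  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(key, { count: 1, windowStart: now });
    return next();
  }

  entry.count += 1;
  if (entry.count > MAX_REQUESTS) {
    res.setHeader("Retry-After", Math.ceil((entry.windowStart + WINDOW_MS - now) / 1000));
    return res.status(429).json({ errors: [{ code: "RATE_LIMITED", title: "Too many requests" }] });
  }
  next();
}

const app = express();
app.use(rateLimit);
app.get("/users", (req, res) => res.json({ data: [] }));
app.listen(3000);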

Performance Optimization Techniques

1. Efficient Data Handling

Data efficiency is paramount for scalable APIs:

  • Implement pagination for all collection endpoints
  • Support sparse fieldsets to reduce response payload size
  • Use compression (gzip, Brotli) for all responses
  • Consider GraphQL for complex data retrieval patterns

The optimal balance between request granularity and roundtrip efficiency depends on your specific use cases, but providing options to clients increases adaptability.

2. Caching Strategies

Sophisticated caching remains one of the most effective performance tools:

  • Implement HTTP cache headers correctly (ETag, Cache-Control)
  • Use CDNs for caching immutable resources
  • Consider distributed caching systems (Redis, Memcached) for dynamic content
  • Implement cache invalidation strategies based on resource state changes

Modern distributed caching with intelligent invalidation can reduce database load by 60-90% for read-heavy APIs.
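
For example, a handler can pair Cache-Control with a strong ETag and honor If-None-Match so that unchanged resources cost a 304 instead of a full payload. The sketch below (Express plus Node's crypto module) is illustrative; the hashing choice and max-age value are assumptions.

// Hedged sketch of conditional caching with an ETag derived from the body.
import { createHash } from "node:crypto";
import express from "express";

const app = express();

app.get("/users/:id", (req, res) => {
  const body = JSON.stringify({ data: { id: req.params.id, name: "Jane Smith" } });
  const etag = '"' + createHash("sha256").update(body).digest("hex").slice(0, 16) + '"';

  res.setHeader("ETag", etag);
  res.setHeader("Cache-Control", "private, max-age=60"); // assumed policy

  if (req.headers["if-none-match"] === etag) {
    return res.status(304).end(); // client copy is still fresh; skip the payload
  }
  res.type("application/json").send(body);
});

app.listen(3000);

Express can also generate ETags automatically for responses it sends; the explicit version here just makes the conditional flow visible.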

3. Database Optimization

Database performance directly impacts API responsiveness:

  • Design database schemas specifically optimized for API access patterns
  • Implement data partitioning for high-volume data
  • Use read replicas for scaling query performance
  • Consider purpose-built databases for specific workloads (time-series, graph, etc.)

In 2025, the polyglot persistence approach—using multiple database technologies based on data access patterns—has become standard practice for high-performance APIs.

4. Asynchronous Processing

Not everything needs to happen synchronously:

  • Implement webhooks for event-driven architectures
  • Use message queues for processing time-consuming operations
  • Provide polling endpoints for job status checking
  • Consider Server-Sent Events or WebSockets for real-time updates

Decoupling time-intensive operations from the request-response cycle significantly improves perceived API performance.
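
A common pattern is to accept the work, return 202 Accepted with a job resource, and let clients poll for status. The sketch below fakes the worker with a timer and an in-memory map purely for illustration; a real system would hand the job to a message queue or background worker.

// Hedged sketch of an asynchronous job pattern: accept, return 202, poll.
import { randomUUID } from "node:crypto";
import express from "express";

const app = express();
app.use(express.json());

const jobs = new Map<string, { status: "queued" | "running" | "done"; result?: unknown }>();

app.post("/reports", (req, res) => {
  const id = randomUUID();
  jobs.set(id, { status: "queued" });

  // Stand-in for handing the work to a queue and a worker process
  setTimeout(() => jobs.set(id, { status: "done", result: { rows: 42 } }), 5_000);

  res.status(202).location(`/jobs/${id}`).json({ data: { id, status: "queued" } });
});

app.get("/jobs/:id", (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.status(404).json({ errors: [{ code: "NOT_FOUND", title: "Unknown job" }] });
  res.json({ data: { id: req.params.id, ...job } });
});

app.listen(3000);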

5. Infrastructure Scaling

Modern cloud infrastructure provides powerful scaling capabilities:

  • Design for horizontal scaling with stateless API servers
  • Implement auto-scaling based on real-time metrics
  • Use edge computing for latency-sensitive operations
  • Consider multi-region deployments for global audiences

The capacity to scale dynamically in response to changing loads has become essential for maintaining consistent performance.

Testing Strategies for Resilient APIs

1. Comprehensive Test Coverage

APIs demand multi-layered testing approaches:

  • Unit tests for individual components and business logic
  • Integration tests for database interactions and external services
  • Contract tests to verify API specification compliance
  • End-to-end tests for critical user journeys

Aim for a testing pyramid with many fast unit tests and a smaller number of more comprehensive integration and end-to-end tests.
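
As a small illustration of the contract-testing layer, the sketch below uses Node's built-in test runner to call a running API instance and assert that the response matches the documented envelope; the base URL, route, and expected fields are assumptions tied to the earlier examples.

// Illustrative contract-style test against a running API instance.
import test from "node:test";
import assert from "node:assert/strict";

const BASE_URL = process.env.API_BASE_URL ?? "http://localhost:3000"; // assumed

test("GET /users/:id returns the documented envelope", async () => {
  const res = await fetch(`${BASE_URL}/users/123`);
  assert.equal(res.status, 200);

  const body = await res.json();
  assert.ok(body.data, "response has a data object");
  assert.equal(body.data.id, "123");
  assert.ok(body.meta?.requestId, "response carries a request id");
});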

2. Performance Testing Under Load

Understanding how your API behaves under stress is critical:

  • Establish baseline performance metrics for all endpoints
  • Perform regular load testing to identify scaling limitations
  • Test concurrency handling with realistic usage patterns
  • Measure third-party dependency impacts on overall performance

Automated performance testing should be integrated into your CI/CD pipeline, with alerts for significant regressions.

3. Chaos Engineering for APIs

Resilience testing has become mainstream for critical APIs:

  • Simulate network latency and failures
  • Test with partial dependency outages
  • Implement error injection at various system layers
  • Validate fallback mechanisms and graceful degradation

Building APIs that degrade gracefully under adverse conditions has become a key differentiator for enterprise systems.

4. Security Testing

Continuous security validation should include:

  • Regular penetration testing by security professionals
  • Automated scanning for common vulnerabilities
  • Fuzz testing to identify unexpected input handling issues
  • Authentication and authorization bypass attempts

Security testing should shift left in the development process, with automated checks running from the earliest development stages.

5. Monitoring as Testing

Modern observability transforms monitoring into continuous testing:

  • Implement synthetic API calls to validate key functions
  • Set up alerting based on business-relevant metrics
  • Analyze traffic patterns to identify potential issues
  • Use AI-powered anomaly detection to spot unusual behavior

The line between testing and monitoring continues to blur, with production environments providing valuable insights for continuous improvement.

Building for the Future

As we look beyond 2025, several emerging trends will shape the next generation of API development:

  • API Sustainability: Energy-efficient design patterns to reduce the carbon footprint of digital interactions
  • Autonomous APIs: Self-optimizing interfaces that adapt to usage patterns without human intervention
  • Embedded AI: Integration of machine learning directly into API processing for intelligent data handling
  • Immutable APIs: Versioning approaches that guarantee perpetual compatibility for critical interfaces

By building on solid foundations while remaining adaptable to emerging technologies, your APIs can deliver lasting value to your organization and customers.

Conclusion

Building scalable RESTful APIs in 2025 requires balancing established best practices with emerging technologies and methodologies. The principles outlined in this article provide a framework for developing APIs that are not just functional, but scalable, secure, and future-ready.

At Leverture, we help our clients navigate these complexities by providing expert guidance on API strategy, design, and implementation. Whether you're building new APIs or evolving existing ones, focusing on thoughtful design, security, performance, and testing will position your interfaces for long-term success.

Ready to elevate your API development practices? Contact Leverture today for a consultation on how we can help you build APIs that scale with your business.

Navigating the critical build vs. buy decision with a structured framework that balances strategic considerations, total cost of ownership, and future business evolution.

The Strategic Considerations That Matter Most

Every technology leader faces this pivotal decision: should we build custom software or purchase an existing solution? With increasing options in both custom development and SaaS offerings, this choice has become more complex yet more consequential than ever.

At Leverture, we've guided numerous organizations through this critical decision process. While there's no one-size-fits-all answer, there is a structured framework that can help you navigate these waters with confidence.

Core Business Function vs. Supporting Process

The first question to ask: Is this application core to your competitive advantage or merely a supporting function?

Build when: The software directly enables your unique value proposition or competitive advantage. For example, Amazon built its logistics software because supply chain efficiency is fundamental to its business model.

Buy when: The function is necessary but not differentiating. Human resources information systems, accounting software, and email platforms typically fall into this category for most businesses.

Unique Requirements vs. Industry Standards

Build when: Your business processes are unique, proven to be effective, and would require significant compromise to fit into an off-the-shelf solution.

Buy when: Your requirements align with industry standards and best practices that are already well-established in commercial products.

Time-to-Market Considerations

Build when: You have the luxury of time and the long-term benefits of a custom solution outweigh the immediate need for implementation.

Buy when: You need a solution deployed quickly. Most SaaS products can be implemented in weeks or months, while custom development typically takes months or years.

Available Expertise and Resources

Build when: You have access to technical talent (either in-house or through trusted partners) with the relevant expertise and bandwidth.

Buy when: Your organization lacks the technical capabilities required for custom development and ongoing maintenance.

Total Cost of Ownership Calculations

The true cost of either approach extends far beyond the initial investment. Here's how to calculate the total cost of ownership (TCO) for both options:

For Buy (Commercial Off-The-Shelf Software):

  1. Licensing/Subscription Costs: Annual or monthly fees, often based on number of users
  2. Implementation Costs: Setup, data migration, integration with existing systems
  3. Customization Costs: Modifications to meet specific requirements
  4. Training Costs: User education and change management
  5. Ongoing Support: Internal resources needed to administer the system
  6. Integration Expenses: Connecting with other business systems
  7. Upgrade Costs: Expenses associated with new versions and features

For Build (Custom Development):

  1. Initial Development: Design, development, testing, and deployment
  2. Infrastructure: Hosting, servers, and other hardware requirements
  3. Maintenance: Bug fixes, updates, and security patches
  4. Enhancement: Adding new features and capabilities over time
  5. Support: Technical staff to assist users and manage the application
  6. Knowledge Transfer: Documentation and training for internal teams
  7. Opportunity Cost: Resources directed away from other initiatives

It's crucial to consider these costs over a 5-7 year timeframe, as the economics can shift dramatically over time. While custom solutions typically require higher upfront investment, subscription costs for commercial products accumulate and may exceed custom development costs in the long run.

When to Customize Off-The-Shelf Solutions

Sometimes the optimal approach is a middle path—customizing a commercial product to better align with your specific needs. This approach makes sense when:

The 80/20 Rule Applies: When an off-the-shelf solution meets approximately 80% of your requirements, and the remaining 20% can be addressed through available customization options.

The Platform Is Extensible: The solution offers robust APIs, extension frameworks, or customization capabilities designed for enterprise use.

You Need Some Speed Advantages: You can leverage the ready-made features while focusing customization efforts only on the truly unique aspects of your business.

Cost-Effective Compromise: When full custom development is prohibitively expensive, but a standard solution without modifications would create significant operational inefficiencies.

However, be cautious of excessive customization. We've seen organizations modify commercial products so extensively that they encounter the worst of both worlds: the high costs of custom development combined with the constraints of a commercial platform. A good rule of thumb: if customization will exceed 40% of the product's core functionality, a custom-built solution might be more appropriate.

Future-Proofing Your Technology Decisions

Whether you build or buy, technology decisions must account for future business evolution. Here's how to maintain flexibility:

Modular Architecture

If building custom, design with modular components that can be updated or replaced independently. This allows for gradual modernization rather than complete rewrites.

For commercial solutions, prioritize those with modular designs and clear upgrade paths that won't disrupt your customizations.

API-First Approach

Ensure any solution—custom or commercial—offers robust APIs for integration. This provides flexibility to connect with new systems and replace components as needed.

Data Portability

Your data is your most valuable asset. Ensure you maintain ownership and easy export capabilities to avoid vendor lock-in with commercial solutions.

Scalability Planning

Anticipate growth in users, transaction volume, and data storage. For custom solutions, design for scale from the beginning. For commercial products, understand scaling limitations and associated costs.

Technology Stack Longevity

For custom development, choose established technologies with strong community support rather than bleeding-edge options that may not stand the test of time.

A Decision Framework in Practice

To illustrate how this framework can be applied, consider a mid-sized financial services firm that needed a client portal for their wealth management clients:

  1. Strategic Assessment: Client experience was central to their competitive advantage, suggesting a build approach.
  2. Requirements Analysis: They needed portfolio visualization, financial planning tools, and document sharing capabilities integrated with their proprietary investment models.
  3. TCO Calculation: A five-year projection showed that while a custom solution required higher initial investment, the total cost would be lower than a heavily customized commercial option with ongoing licensing fees.
  4. Future Considerations: The firm anticipated expanding into new service areas, requiring a flexible system that could evolve with their business.

The decision: A custom-built portal with a modular design that could incorporate third-party components for standard functions while allowing for proprietary elements that differentiated their service offering.

Conclusion: A Balanced Approach

The build-vs-buy decision is rarely black and white. Many organizations benefit from a portfolio approach—building custom solutions for truly differentiating functions while purchasing commercial products for standardized processes.

At Leverture, we help our clients navigate these decisions with a clear methodology that balances immediate needs with long-term strategic goals. Our experience has shown that the right decision isn't just about technology—it's about aligning technology choices with business strategy to create sustainable competitive advantage.

Whether you're considering your next technology investment or evaluating your current application portfolio, applying this framework can help ensure your decisions create lasting value rather than technical debt.

Ready to make more informed build-vs-buy decisions for your organization? Contact Leverture today for a strategic technology consultation that aligns with your unique business objectives.

On a recent project, I was tasked with creating a new administration site to replace a piece of a monolithic legacy application.

The Challenge

The legacy system did not have any APIs we could reuse, so we would have to create them from scratch. Since the data would likely be needed elsewhere during the rewrite of the entire application, I wanted the new APIs to make that data reusable.

With a traditional REST API, when you hit an endpoint you get exactly the structure it was programmed to give you.  

What happens if I need more data for the other functions or less data?  

I would have to either make another request or create a new endpoint. One of the biggest problems plaguing the legacy application was performance: some pages made as many as 150 server calls.

The Goal

My goal was to deliver a new system that was fast and would require less rework as other services were moved out of the legacy application. I had come across GraphQL several months earlier and decided that it was not just interesting but genuinely simplified creating APIs and web development in general.

What is GraphQL?

GraphQL is a query language for your API originally developed by Facebook.

What are the advantages of GraphQL?

It has many advantages over a plain REST API. One of the biggest is that you expose just one API endpoint, and the client posts a query or mutation to get exactly the result it needs.

Here is a short list of why you should use GraphQL for your next API.

(1) With GraphQL you get only the data that you request.

GraphQL is a query language, so with each request you specify exactly the fields you need. No more, no less.

In subsequent requests you can also add or subtract fields from the requests to get more or less data all through the same endpoint.  

With REST, you either have an endpoint that returns all the data for that resource, or you have to create multiple endpoints to cover the cases where you need less data.

In addition, GraphQL supports arguments on fields, so a schema can expose filters (such as a where clause) that return a filtered result set. With REST, this would typically mean changing the signature of the API endpoint.
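
To illustrate the single-endpoint model from the client's perspective, the hedged sketch below posts a query that names exactly the fields it needs and passes a filter variable; the endpoint URL, schema fields, and where argument are illustrative and assume the server's schema defines them.

// Hedged client-side sketch: one GraphQL endpoint, fields chosen by the caller.
const query = `
  query ($status: String) {
    orders(where: { status: $status }) {
      id
      total
      customer { name }
    }
  }
`;

async function fetchOpenOrders() {
  const res = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { status: "OPEN" } }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(errors[0].message);
  return data.orders; // only id, total, and customer.name come back
}

fetchOpenOrders().then(console.log).catch(console.error);

If a consumer later needs another field, it simply adds that field to the query; no new endpoint and no API version bump is required.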

(2) Schema stitching allows you to have a GraphQL gateway with multiple services behind it.

The great thing about this is that each backing service can be a REST service or a GraphQL service. The gateway then lets you query, and set up relationships between, the data from each service within a single query. This has many advantages that will be addressed in a later post.

(3) With GraphQL, versioning is not required.

GraphQL allows the front end to specify only the fields that are needed. When you add fields to a schema, you will not break consumers of the API. You also do not have to create a mobile version and a desktop version of your API; consumers simply request the data they need.

(4) Save time and bandwidth.

As stated above, the client specifies only the data it wants and can retrieve multiple schemas in the same request. This prevents the client from having to make multiple round trips to the server.

The end result is that the client is able to save time (both development and application) and reduce the bandwidth required to support the application.

How do you decide how to approach custom software development?

There are few, if any, businesses that can get by without robust digital infrastructure these days. Whether it’s a cashier and inventory system, automated payroll, a CRM or sales infrastructure, or the expensive, custom ones — an internal app or a consumer-facing mobile app — almost every business needs holistic digital infrastructure to succeed.

For many of these solutions, great off-the-rack solutions exist.

You probably don’t need to build a custom CRM most of the time — you can configure Salesforce to do what you need cheaper and more easily.

The same is probably true for your point-of-sale system: there are great white-label solutions already on the market.

But, when it comes to custom solutions tailor made to your business, your workflows, and your clients… off the rack might not cut it.

So how do you decide how to approach custom software development?

  • The biggest question we usually get is when does it make sense to offshore?
  • And what do we need to consider before/when we do?

Here’s our top 7 things to consider when choosing to offshore your software development needs.

1) It takes a lot of time to find quality on-shore developers

If you’re hiring one or more developers on-shore and full time, it takes a while to recruit for culture and aptitude fit. You’re not just trying to find the right person to build the product you have in mind — you’re hiring a staffer.

You need to make sure that staffer is the right fit for your culture and company if you’re shelling out benefits, etc.

And, as you’re probably aware, the demand for skilled developers far exceeds the supply, so it takes significant time and often more money than you realize to recruit the right one.

2) The ability to hire for a short-term commitment is a boon for you, and a point for offshore deployments

This is one of the most underrated benefits of outsourcing or hiring offshore. When you hire on a project basis, you can bring in an expert (or experts) with the specific aptitude you need for right now.

Then, as your needs shift in the future (or for different projects), you can bring in the appropriate expert without having to recruit and hire someone full time.

3) Follow-the-sun development schedule

Most software development outsourcing today hails from India. The time zone differential between India and the U.S. allows American enterprises to check work, give feedback, and assign new programming tasks during our daytime.

Then, we receive completed work the next morning when we sign on because our Indian counterparts did that work during their day/our night.

4) Documentation, documentation, documentation

A pitfall to watch out for, though, goes hand-in-hand with follow-the-sun development calendars — the absolute necessity for clear and exhaustive documentation.

Whereas with a domestic or staff resource, you may be able to make a comment in a meeting and have that idea executed into code the next day, that’s usually not possible with an outsourced resource.

For one, you’re not usually in meetings together given the time differential.

And for another, there can be language and cultural barriers between what you intend and what your outsourced developer hears and executes on.

As such, it’s imperative to document early, document often, document clearly and document comprehensively.

The only way to ensure there are no costly miscommunications is with excellent documentation, which can be a drag on the speed and ease of outsourcing if you’re not properly forewarned about this necessity.

5) Do you find a partner or do you hire a resource?

When it comes time to make your outsourcing decision, the bigger — and perhaps most important — choice you make is whether you enlist a partner or just hire what you deem to be the necessary set of resources.

By that I mean, if you need a developer or two to build your mobile app, but you have the technical acumen and vision and project management capability to spearhead it yourself, you might only hire those specific programming resources for your project.

But if you need to build a complex, comprehensive mobile app and don’t already have the know-how on your team, it will almost certainly behoove you to hire a development partner who already has quality programmers under contract or on staff. That way you’re not out recruiting resources yourself and then left to QA their work, which is especially risky if you don’t have the coding expertise to judge how good that work actually is.

The right partner will have project managers, technical leads, data architects, etc. all on staff, meaning you’re only paying for the percentage of those resources’ time you actually use/need on your project.

6) Offshore quality

The biggest unknown when it comes to outsourcing technical resources is the quality of what and whom you’re hiring.

The whole idea behind outsourcing is getting what you need at a similar quality for less money; off-shore resources cost less per hour than on-shore ones do. Pretty simple, right?

While it really is that simple in concept, the difference between cut-rate resources and worthwhile ones can be huge, both in quality as well as in cost.

There is no doubt you can save money by outsourcing well, but if you aren’t vigilant or well versed in what you’re looking for, you can suffer huge lapses in quality for those cheaper rates.

What many decision makers don’t think through is that while the programming resources may be cheaper, you may end up spending far more of your and your team’s time overseeing those resources than you realize.

So by that metric, it may not be saving you that much money in the long run because it’s eating into your valuable time instead.

Another point for the partner route is that you can usually split these differences.

The partner handles the time-consuming parts of the process while you pay offshore prices for development resources. That way, you can keep quality high while still keeping cost down.

7) Cost

I saved the best for last. No one considers hiring offshore unless and until cost comes into play.

If you can get the same quality of development work at significantly reduced rates, why wouldn’t you as a business leader?

But, as I laid out in point 6, it’s not always such an apples-to-apples comparison. You can’t forget or fail to factor in the management costs you’ll expend communicating with and monitoring the performance of your vendor (unless you hire a partner, that is).

According to a report cited in ZDNet, Aberdeen Group research shows that

“76% of customers report project administration and vendor management costs to be far higher than expected, which won’t come as a surprise to anyone who has done any outsourcing.”

You absolutely can save money going with an outsourced resource, but you have to make sure you’re taking all the cost variables into account when you’re making that decision.

One last note before you go

Outsourcing can be one of the best decisions you and your company make when it comes time to build a custom software solution.

But as you can see, there are a lot of factors to consider and weigh before choosing the correct path forward.

As such, it almost always helps to speak with an expert who can help guide you along in that process.

That’s where we come in — even if you don’t go with us as your partner, we can help you make sure you’re thinking through everything both holistically and granularly; that way, you know your money is actually going the farthest and you’re getting real quality for that dollar.

Almost all software development shops and internal development teams (and, really, any software project in its own right) follow an agile methodology these days. Almost no one intentionally develops software using waterfall anymore; in fact, agile has become so ubiquitous that developers often don’t even list ‘agile methodology’ as a ‘Pro’ in a sales deck’s Pro/Con list.

All that to say, agile really is the name of the game when it comes to efficient and transparent software development.

But agility goes beyond best practices for software development — it’s the key to sustained success for any enterprise in the modern economy.

So how do you and your projects ascend beyond merely “agile” development methodology to become a fully-baked agile enterprise?

You become a fully-baked agile enterprise with Plasticity x Versatility.

Agile enterprises

The key to an agile enterprise is a mixture of plasticity and versatility. While they may seem somewhat similar in meaning, there are important distinctions between the two.

What is Versatility?

Versatility refers, in this context, to your ability (or your team’s ability or your client’s ability) to adapt and change business processes as needed without having to change architectures.

What is Plasticity?

Plasticity, on the other hand, refers to an enterprise’s ability to change architectures without impacting or altering business processes.

What do these actually mean in practice though? Why do they matter? And what makes agility so prized, anyway?

What is the virtue of agility?

We can talk about agility, plasticity and versatility in abstract or philosophical terms all we want. At the heart of the issue, though, is the corporate value of being agile.

Agility isn’t an end in itself; it’s not virtuous by nature. But, it is absolutely crucial to business success in the 21st century.

No matter the industry in which you operate, the most consistent hurdle you’ll encounter is change. The speed of innovation and disruption is so steep today that change is quite literally a guarantee; it’s a fact of life in our modern economy and society.

Why Agility is so valuable

That’s why agility provides so much value to organizations. If you’re agile, by definition, you can react to those changes quickly and adeptly, ensuring that you’re serving your stakeholders as well as possible, with the least amount of system down time or operational disruption.

If agility is a core focus of your enterprise, it means you’re best able to serve all your stakeholders both now and well into the future.

If you can execute on that mission, it means you’re well positioned permanently.

How do you get agile, then?

To achieve agility, then, you have to be laser focused on both plasticity and versatility.

Versatility

In practice, that means when your users’ requirements change (whether that’s internal stakeholders or clients, etc.), your applications have to adapt rapidly — that’s versatility. You should be able to actuate change requests in user needs or experiences without having to completely rework underlying system architectures.

Plasticity

At the same time, you have to be able to change your entire enterprise capabilities if there are wholesale changes in business objectives (again, for whichever stakeholder you’re serving in this instance). That’s plasticity. This means you can keep business units and processes operating at scale, efficiently, even while you’re modifying or enhancing underlying architectures for what comes next.

The Challenge

The challenge, as you have probably sussed out, is how to balance these two seemingly discordant goals: changing business processes without touching architecture (versatility), and changing architecture without touching business processes (plasticity).

The Key

The key is that you have to be rigorous in your analysis of stated goals and optimal outcomes. You need to read between the lines of whatever your stakeholder is requesting or requires to realize if they truly need a plastic change, a versatile change, or all of the above.

Only when you begin thinking on all three axes can you achieve true agility.

Enterprise success isn’t an x-axis for versatility and a y-axis for plasticity where, so long as you’re moving upward and to the right, you’re succeeding. It’s a three-dimensional space with agility as your Z-axis.

Because agility for agility’s sake, as we outlined above, isn’t an intrinsic good. It’s a tool that, used wisely and effectively, allows you to achieve enterprise excellence in pursuit of real business ends.

How you know if you are an agile enterprise

So, if you’re moving upward, to the right and forward in this theoretical space, with agility toward a worthy business end as your Z-axis, then you’re an agile enterprise.

The ability to be agile in any one business decision, business process or business system pushes your enterprise toward permanent agility.

The more you move upward, rightward and forward, the more successful you and your company and your stakeholders will be.

For years, cloud computing was the future.

The ability to migrate core business services and infrastructure off site, without having to build and upkeep servers, secure them, etc., was a golden promise of the 21st century.

Turning a massive infrastructure cost into an ‘as-a-Service’ expenditure would revolutionize the business landscape forever — and sure enough, it did.

Small firms could lease just what they needed, without having to stand up costly data centers on their own. The barrier to entry was dramatically reduced, opening doors for companies the world over.

But as cloud providers became larger and larger parts of the business world, the number of providers proliferated in kind.

Cloud computing has exploded in popularity. According to Gartner, one of the foremost market research firms,

worldwide end-user spending on public cloud services is forecast to grow 23.1% in 2021 to total $332.3 billion, up from $270 billion in 2020.

“The events of last year allowed CIOs to overcome any reluctance of moving mission critical workloads from on-premises to the cloud,” said Sid Nag, research vice president at Gartner.

“Even absent the pandemic there would still be a loss of appetite for data centers.

Emerging technologies such as containerization, virtualization and edge computing are becoming more mainstream and driving additional cloud spending.

Simply put, the pandemic served as a multiplier for CIOs’ interest in the cloud.”

As more and more services move to the cloud, though, how do you know if you’re selecting the right cloud provider for you and your company’s needs?

We’ve broken down your decision tree into four core areas, with the most important considerations under each:

(1) The health and reputation of the cloud providers

Financial health:

  • Is the service provider on strong financial footing?
  • Will they be there in 5 years? 10?
  • If a provider is undercutting competitors on price, ask yourself — are they the real deal?
  • And will they be here for us well into the future?

Reputation:

  • Does the provider have a sterling reputation in the space?
  • Who are their partners?
  • Are there a bevy of public, third party reviews attesting to their expertise and professionalism?
  • Finally, talk to customers in a similar situation to you to find out what their experience has been.

Good governance and risk management:

  • Is it easy to ascertain the company’s management structure with clear lines of responsibility?
  • Do they have well-established risk management protocols that you can see for yourself?
  • Much as you need to know if the company is adequately capitalized, so too must it be well governed.

Business/technology fit:

  • Does the provider truly understand your business and what you’re trying to do?
  • There has to be a strong fit between the provider and your firm’s goals.

(2) Metrics and monitoring

Service Level Agreements (SLAs):

  • Any provider should be able to guarantee a minimum level of service that suits your needs, put it in a contract, and uphold that contract.

Performance KPIs:

  • You should be able to pull performance reports with metrics important to you. If a provider doesn’t offer this, that should be a yellow flag at best.

Resource & config monitoring and management:

  • Your provider of choice should be able to monitor the services they’re providing to you as well as be able to update you on any changes made to said system.

Cost controls and billing:

  • Cost monitoring and billing ought to be automated and transparent. You have to be able to monitor what resources you’re using, at what pace and intensity, so you don’t end up running up unexpected bills.

(3) Technical capabilities

Deployment and change management:

  • It should be easy for you to deploy, manage, and upgrade your software and applications. If a provider can’t prove their aptitude here, keep it moving. There also should be documented and formal processes for requesting, logging, approving, testing, and accepting changes.

APIs:

  • Any worthwhile provider should use standard APIs and data transforms. If it’s hard to connect to their cloud, again, it’s not worth your time or money.

(4) Cybersecurity

Security infrastructure:

  • No matter the service or system you’re leasing from a provider, there ought to be a comprehensive and robust security infrastructure. Cloud services simply do not provide value to you if their security is anything less than exemplary.

Security policies and documentation:

  • There should be well-established and documented security policies and procedures with strict access control for both provider and customer systems.

Identity management:

  • Changes to any software or hardware system component should be authorized on an individual or group role basis; two-factor authentication should be required for anyone to change an application or data.

Data backup and retention:

  • Few things are as important as data backup and retention — if something infects your local network, you have to know your backups are in the cloud, protected and operational for a rebuild.

Physical security:

  • While a physical incursion to a server farm is unlikely, it must still be protected against. Furthermore, your cloud provider should have environmental safeguards to protect both equipment and your data from a disruptive event (hurricanes, floods, power outages, etc.). There should be redundant networking and power and a documented disaster recovery and business continuity plan.

There are, as you can see, a lot of factors to consider when you’re selecting a cloud provider for your business.

When it comes time to build out your technology stack, update your systems, or launch a new digital product, many small to medium-sized companies hire a technology partner to help lead them through high-stakes choices like these (especially if they don’t have a deep reservoir of technology expertise on staff or don’t want to hire that kind of resource on a full-time basis).

That’s where we come in — not only can we help you make the tough decisions, we can incorporate those decisions into technology plans and execution programs that put you on a path to sustained success marked by a sizable market edge.

If you’re a decision maker for a Small or Medium-Sized Business (SMB), finding a market edge through technology can be both invaluable and daunting.

Unlike large enterprises, you may not have a deep roster of technical expertise on staff to conceptualize and execute on large technological initiatives.

But, also unlike large enterprises, bringing a new technological tool or service to your firm’s business offering can provide an outsized advantage over your similar-sized rivals because those rivals are also less likely to have the deep pockets and tech resources that a large enterprise would.

It’s not just features or products that make a difference, though — the way your organization approaches technology can have a massive impact on you, your customers and your market edge. That’s where DevOps comes in.

How does DevOps work?

At its best, DevOps unifies people, process, and technology to bring better products to customers faster. Changing your mindset and staffing approach to embrace DevOps can legitimately revolutionize your company (and its future prospects).

What is DevOps anyway?

‘DevOps’ the word is a combination of development and operations. That portmanteau isn’t just clever wordplay, though — it represents a novel approach to team building, software development, staffing… really your entire firm’s outlook. So what exactly is it?

DevOps Defined

At its base level, DevOps coordinates and fuses formerly siloed roles— things like development, IT operations, quality engineering, security, etc. By breaking down these silos to better coordinate and collaborate, your company and its technology operations can produce more reliable products.

As Microsoft concluded,

adopting a “DevOps culture along with DevOps practices and tools, teams gain the ability to better respond to customer needs, increase confidence in the applications they build, and achieve business goals faster.”

DevOps for application development

When we’re focusing on software or app development, DevOps typically divides the lifecycle into four stages: Plan, Develop, Deliver, Operate.

Plan Phase

In the plan phase, DevOps teams ideate, define, and describe features and capabilities of the applications and systems they are building.

Develop Phase

The develop phase includes all aspects of coding — writing, testing, reviewing, and integration of that code.

Deliver Phase

Delivery is the process of deploying applications into production environments in a consistent and reliable way.

Operate Phase

The operate phase involves maintaining, monitoring, and troubleshooting applications in production environments.

These stages are present in every application development lifecycle, regardless of methodology.

How DevOps Differs

But instead of the classic model of app development, where one team or individual owns the stage that most closely aligns with their job title, in a DevOps framework the whole team is responsible for every stage together.

For example, developers are responsible not only for innovation and quality in the develop phase, but also for stellar performance and stability in the operate phase (especially as it relates to changes their development decisions lead to in the operate phase).

Likewise, IT operators have to keep an eye out for governance, security, and compliance in the plan and develop phases, not just the operate phase.

DevOps reimagines software development

Beyond breaking down silos and accountability barriers, though, DevOps also reimagines the entire way you develop and deliver software.

DevOps increases agility by design because you typically release software in shorter cycles. Shorter release cycles make planning, risk management, and the impact on system stability easier to manage because progress is more incremental.

Instead of releasing a huge new suite of products or features in one giant dump, you make smaller, more incremental releases along the way.

This improves your planning while disrupting your live environment far less than a standard development lifecycle would.

Shortening the release cycle also allows organizations to adapt and react to evolving customer needs and competitive pressure better, too.

To achieve this, though, requires a change in culture as well as a change in aim. As IBM put so well:

At the organizational level, DevOps requires continuous communication, collaboration and shared responsibility among all software delivery stakeholders – software development and IT operations teams for certain, but also security, compliance, governance, risk and line-of-business teams – to innovate quickly and continually, and to build quality into software from the start.

The best way to accomplish [DevOps] is to break down these silos and reorganize them into cross-functional, autonomous DevOps teams that can work on code projects from start to finish – planning to feedback – without making handoffs to, or waiting for approvals from, other teams.

In the context of agile development, that shared accountability and collaboration form the bedrock of a shared product focus that delivers valuable outcomes.

DevOps for everything else

By increasing visibility, accountability, and collaboration between teams, you break down silos not only within the software life cycle, but across your organization in its entirety.

Especially for SMBs where engineers or tech staff often wear multiple hats in a lean environment, implementing DevOps throughout the org can radically transform business results beyond just tech concerns.

For SMBs, though, it can often be difficult to conceptualize and implement a full shift to a DevOps mindset on your own — that’s where we come in.

It can be hard to figure out where to start, where to focus your effort, and what DevOps looks like in your industry with your specific challenges (all while trying to actually run your business).

That’s why partnering with a strategic technology expert can help you conceptualize and realize a shift to the DevOps frame of mind organization-wide.

So if you’re interested in making the leap, drop us a line — we’d love to chat.

Regardless of where you sit in the DevOps pipeline — developer, site reliability engineer, IT Ops specialist, program manager, etc. — monitoring is mission-critical. You simply cannot deliver on the highest ideals of DevOps’ continuous integration and continuous development without robust and insightful end-to-end monitoring.

If you begin with end-to-end visibility across the health of your resources, you can “drill down to the most probable root cause of a problem, even to actual lines of code, fix the issue in your app or infrastructure, and re-deploy in a matter of minutes,” wrote Rahul Bagaria, Senior Product Manager for Azure Monitoring & Analytics at Microsoft.

“If you have a robust monitoring pipeline setup, you should be able to find and fix issues way before it starts impacting your customers.”

Monitoring in Azure means continuous

Everything about transforming your corporate culture to a DevOps model relies on continuousness. Continuous integration. Continuous development. And to cap it all off? Continuous monitoring (CM).

CM is the natural outflow of the DevOps approach to work environments.

If you can incorporate monitoring across each phase of your DevOps and IT Ops cycles, you can better ensure the health, performance and reliability of your apps and infrastructure continuously as they flow through developers into production and on to customers.

Microsoft graphic demonstrating the Azure monitoring pipeline

Azure Monitor

Azure Monitor allows you to collect, analyze and act on telemetry data from both your Azure and on-premise environments.

By using this suite of tools, you can maximize performance and the availability of your applications while proactively identifying problems in seconds.

According to Microsoft, you can also

“store and analyze all your operational telemetry in a centralized, fully managed, scalable data store that’s optimized for performance and cost.”

Azure Monitor will also integrate with DevOps, issue management, IT service management, security information and event management tools.

But, to leverage Azure Monitor to its fullest, you have to achieve continuous monitoring; that requires three primary areas of insight: monitoring your applications, monitoring your infrastructure and monitoring your network.

Monitor your applications

The first step toward full observability is to enable monitoring across all your web apps and services. That means adding Azure Monitor Application Insights SDKs to your apps at the code level.

But, once you’ve done that, you’ll have everything you need to monitor availability, performance and usage of your web applications regardless of where they’re hosted (Azure, on-premise, etc.).

Whether you’re working in .NET, Java or Node.js, Azure Monitor integrates seamlessly. It also plays nice with DevOps processes and tools like Azure DevOps, Jira or PagerDuty.
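
For a Node.js service, enabling that code-level monitoring can be as small as the hedged sketch below, using the applicationinsights package; the connection string comes from your own configuration, and the custom event at the end is purely illustrative.

// Hedged sketch: enabling Application Insights telemetry in a Node.js service.
import * as appInsights from "applicationinsights";

appInsights
  .setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING) // assumed env variable
  .setAutoCollectRequests(true)      // incoming HTTP request telemetry
  .setAutoCollectDependencies(true)  // outbound HTTP and database calls
  .start();

// Requests, dependencies, and exceptions now flow to Azure Monitor automatically;
// custom events can also be tracked explicitly when needed.
appInsights.defaultClient.trackEvent({ name: "ServiceStarted" });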

“Once you have monitoring enabled across all your apps you can easily visualize end-to-end transactions and connections across all the components,” Bagaria wrote.

Monitor your infrastructure

Once you have full transparency into every application in your stack, next comes your infrastructure. It’s always tricky to predict what components of your application stack might have an issue at any given time, so it’s mission-critical to monitor every relevant component.

With Monitor, you can analyze and optimize the performance of your infrastructure, including:

  • Virtual machines (VMs)
  • Azure Kubernetes Service (AKS)
  • Azure Storage, and databases

Yes, you can even monitor your Linux and Windows VMs, their health and their dependencies… all on a single interface.

“Azure Monitor can help you track the health and performance of your entire hybrid infrastructure, be it VMs, Containers, Storage, Network, or any other Azure services."

"You automatically get platform metrics, activity logs, and diagnostics logs from most of your Azure resources and can enable deeper monitoring for virtual machines or AKS clusters with a simple button click on the Azure Portal or installing an agent on your servers.”

Monitor your network

Because everything moves through your network when working in a cloud or hybrid environment, you can’t achieve continuous monitoring without insight into that network.

With Azure Monitor, we can help you diagnose networking issues without logging into your virtual machines.

You can trigger a packet capture, diagnose routing issues, analyze network security group flow logs, and gain visibility and control over your Azure network, all within the same monitoring platform.

Once you’re monitoring your apps, your infrastructure and your network in one solution, you can start to truly unlock the power of continuous monitoring.

Some best practices for Azure Monitoring

Set up actionable alerts with notifications and/or remediation:

You can’t achieve continuous monitoring without a robust alerting pipeline. As such, we recommend setting up actionable alerts for all predictable failure states.

Alerts can be triggered by static or dynamic thresholds, with actions layered on top. Actions can be as simple as text messages, emails, or push notifications for straightforward cases.

And, when possible, you can design remediation into the workflow with Azure Automation runbooks, or with auto-scaling in the case of elastic workloads.

Prepare role-based dashboards and workbooks for reporting:

Both sides of the Dev and Ops equation ought to have access to the same telemetry and the same tools.

Azure Monitor is

“designed as a unified monitoring solution for the entire team, and you can easily prepare custom role-based dashboards based on common metrics & logs,” according to Bagaria.

You can also create multiple dashboards within the Azure portal, each of which can include tiles visualizing data from multiple Azure resources touching different resource groups and subscriptions.

You can further pin relevant charts and views from Azure Application Insights to create custom dashboards; this can paint a complete picture of the health and performance of any application, infrastructure element or network state… all tuned to the reporting needs of individual teams or roles in your organization.

Continuously optimize through the ‘build, measure, learn’ framework:

Building the right solution for your team or your customers is never a one-off success.

It’s an iterative process that takes multiple swings to get right (and even then, you have to continue monitoring and honing the solution for it to remain optimal).

You can’t ‘build, measure, learn’ without measuring… and you can’t measure without monitoring.

By setting up Azure Monitor correctly in your tech stack, you can track and optimize your health, availability, performance, and reliability… all while tracking end-user behavior and engagement to optimize your customer experience.

Azure Monitor even provides impact correlation, which can guide areas of prioritization as well as accurate KPI formation.

The Microsoft Model

Microsoft didn’t just develop Azure and Azure Monitor… they completely rebuilt their IT culture and systems to transition to a distributed, agile DevOps approach by leveraging Azure and Azure Monitor.

While we’re just pulling some of the more interesting bits from Microsoft’s own case study, it’s worth reading it in full to learn how vital Azure Monitoring is to an optimal tech environment for any business:

We’re nurturing a transformation to DevOps culture within IT.

DevOps is the union of people, process, and products to enable continued business value to our customers. It puts technology and solution development in the hands of the people who know what the business needs, and it creates a more agile, responsive technology culture.

DevOps has transformed the way that solutions are developed and operated.

The self-service focus and scalable infrastructure allowed us to move our monitoring environment from a centrally controlled and isolated manual service to an Azure-first, DevOps-driven, distributed, and consumable service that our business groups could use to gain true insight into their app environments.

We established several goals early in the process to help guide our transformation efforts:

Democratize common monitoring and alerting management tasks.

The centralized management and maintenance structure simply didn’t fit the DevOps model.

We needed to put the controls for monitoring and alerting in our business app engineers’ hands and give them the freedom to create and manage their monitors, alerts, and reports.

Create a consumable, self-service solution.

To give our business app engineers control over monitoring and alerting, we needed to give them a solution that didn’t require continued centralized IT intervention.

We wanted to ensure that our solution provided automation and self-service capabilities that enabled business app engineers to start creating and tuning their monitoring and alerting solution when they wanted, and to grow it at their own pace.

Move from performance monitoring to health awareness.

Individual metrics provided the raw data for our monitoring environment, but we wanted to provide a more intuitive view of the health of individual apps and the environment as a whole.

The large number of platform as a service (PaaS) solutions required us to examine end-to-end health.

Infrastructure monitoring and raw telemetry didn’t provide insight into the true nature of the app environment, and we wanted to expose the underlying information important to the individual business app owner.

If Microsoft understands the centrality of continuous monitoring to a healthy IT environment, we think it’s probably a good idea for you too.

That’s where we come in, though — we can help walk you through the process of not only migrating to an Azure-based tech stack, but also building out the robust monitoring necessary for true DevOps bliss.

Give us a call so we can put Azure Monitoring to work for you.

Leverture wins at CodeLaunch!

Hackathons can mean different things to different people. Some are great, others are rubbish. CodeLaunch is the former.

Started in DFW in 2012, CodeLaunch takes a different approach from most other hackathons.

Instead of just setting a random development task or goal for companies to compete on (or, more crucially, demanding equity from winners or participants), CodeLaunch took the novel approach of sourcing early-stage startups looking for seed funding (but without a minimum viable product yet developed) and pairing them with local development shops for “free” development work.

The startups get elite development work ostensibly for free, helping push them toward a minimum viable product and future seed funding.

The development shops get to help promising entrepreneurs and their companies take the next step, while also proving their chops on a big stage in front of a host of companies who need… you guessed it, development work at some point.

The development shops “sponsor” seed startups by giving their development time away, and then at the end of Codelaunch, there’s a vote for the winning company/dev shop team.

Leverture and Autix Automotive took home the championship belt at CodeLaunch DFW 2021.

CodeLaunch

According to their own website, CodeLaunch is the “seed accelerator event that pairs early stage tech startups with professional software development teams to accelerate their trajectory toward MVP, seed funding, and beyond… Each of these incredible startups will receive a professional product development hackathon from one of our seed sponsors. They will work together for 24 hours prior to our final event to prepare their product for the competition.”

From its first event in DFW in 2012, CodeLaunch has since evolved into a traveling seed accelerator and startup expo.

“What makes CodeLaunch truly unique is the professional hackathon. Local ‘dev shops’ come together to build out as much of each Finalists’ MVP as possible.

The Finalists then demo these products live and for the first time in front of the CodeLaunch audience, who then vote for a winner. CodeLaunch DFW 2021 hackathon teams are provided by Dialexa, Leverture, Allata, Improving Dallas, and Code Authority.”

CodeLaunch DFW 2021

Frisco Mayor Jeff Cheney welcomed the audience at Comerica Center on November 17th, saying that “entrepreneurism here in our city is part of our DNA, part of our culture”, according to DallasInnovates.

“We treat our entire city as an innovation hub, and we’re starting to get attention across the country,”

– Cheney said.

“We have over 250 startups here in Frisco—and we’re just getting started. We’re going to be the first large city in the country to start with drone deliveries—it’s actually starting here in the next couple of weeks, where you can order a coffee from Starbucks and have it drone-delivered to your house in seven minutes.”

The mayor said Frisco is also working with partners like the University of North Texas “to prepare for the creative economy, and really build our own companies—one of which may be awarded here tonight and, with your services, to become the next great company.”

Leverture takes home top prize with Autix Automotive

Jake Hamann, CEO of Autix, described his company as “an online app platform for automotive enthusiasts, enabling them to create, share, and explore custom vehicle profiles.”

During his on-stage presentation, he said he got the idea for Autix out of frustration with the experience of modifying his Jeep. He found it hard to find resources and ways to share tips and insights.

Even online Jeep clubs weren’t enough of a help, as he told it. Hamann soon realized “the problem wasn’t just with the Jeep enthusiast market,” but rather one that was shared by auto enthusiasts across the board.

The resulting idea was Autix. “32 million households a year modify their vehicles,” Hamann said onstage.

“SEMA—the Specialty Equipment Manufacturers Association — estimated in 2019 that it’s a $45.8 billion industry … Automotive hobbyists and enthusiasts lack the means by which to build, share, and view profiles of customized vehicles,” Hamann said.

Hamann reckons that if he can attract a sizable majority of these folks to one destination, it could pay big dividends.

“Nine out of 10 automotive enthusiasts are likely to recommend their vehicle to somebody else—and enthusiasts are four times more valuable to marketers than the average consumer.”

Each development shop works with their respective startup for 24 hours ahead of the hackathon.

“Leverture has been a great development partner for us,”

– Hamann said before the event to DallasInnovates.

Then, after the hackathon is completed, attendees vote on the winning product/partnership.

Leverture and Autix took home the top prize, with Leverture helping Autix bring their vision to life in a way that could help them secure the $1.5MM in seed funding they’re seeking.

The linchpin of a successful offshore project is a hybrid approach that prizes an onshore project manager.

What used to be a novel approach to software development has become the norm.

Offshore developers have gotten good enough (and have remained cheap enough) that much of the software development ecosystem exists outside the confines of our continental borders.

This makes a ton of sense, right? Great code is great code, regardless of where it hails from.

So why not save a ton of money by using offshore development resources if you can deliver the same (or at least very similar) quality of project?

It's not that simple

Ohhhhh, if only it were that simple. There are a lot of givens in that comparison that, in the real world, are anything but given.

For one, you’re presupposing the offshore resources can deliver a product of the same or similar quality to what you’d use on shore.

For another, you’re assuming that the cost to get there will be less. The cost per developer hour will certainly be lower offshore, but if the quality isn’t the same from the jump, you may have to spend many more hours delivering the same work product… which eats into any cost savings you were hoping to generate while delaying rollout, a cost consideration all its own.

So how do you reap the benefits of an offshore development project while avoiding the pitfalls?

The onshore project manager

From our years working on both onshore and offshore development projects, we’ve found the linchpin of a successful offshore project is a hybrid approach that prizes an onshore project manager.

Most of the obstacles that trip up otherwise strong companies or strong ideas can be boiled down to the three C’s:

  • Communication
  • Coordination
  • Collaboration

We think utilizing an onshore project manager can solve the issues likely to develop within any of those C’s.

Communication

Offshore development firms almost always boast a high degree of English fluency; it’s been our experience they’re right!

But there’s a difference between conversational fluency, or even technical fluency (e.g. coding languages, tech requirements, etc.), and true fluency.

I’m not making a value judgement — I don’t speak another language, so I don’t have a leg to stand on here.

What I am saying, though, is that the nuances and norms of American English are often trickier to navigate than many Americans can manage, much less folks who are speaking English as a second language.

Furthermore, there’s an entire intricate dance between clients saying what they want and us sussing out what they actually want or need… then translating that into clear instructions for an audience of developers speaking English as a second language.

It’s more delicate still to work through challenging conversations between client and developer (we can’t build that for this amount of money, this feature isn’t possible, etc.) without risk of souring the relationship.

Onshore project manager

These nuances in communication are almost always alleviated by an onshore project manager.

They provide a single point of contact through which all requirements, feedback and assignments flow — in both directions.

Developers know who to go to with questions and our clients know to whom they should direct feedback.

Now, if you have a crap onshore project manager, you’re still going to have communication (and coordination and collaboration) problems… but that’s true of any position.

If you hire good people, you’ll be rewarded with good work product.

We’ve found that identifying gifted onshore project managers can be the difference between smooth projects everyone is proud to have worked on… and the opposite.

Coordination

When working on something complex like software development, it’s basically a given that coordination is going to be a challenge.

Add the offshore element to the mix, and it becomes more challenging still.

  • Which developer is working on what feature?
  • For how many hours?
  • Who is coordinating integration and QA for that?
  • To whom do we direct feedback for the most recent version push?

Coordinating between so many moving parts can be a nightmare… then throw in 11-14 hours of time difference? That can leave you in a tight spot.

Again, we’d point you toward an onshore project manager.

The single point of contact can organize and prioritize resources, and make sure all of that is communicated to all relevant stakeholders.

Furthermore, if you have one person managing the time shift, you as a client don’t have to think through that or manage that yourself.

Collaboration

One of the points I made earlier was about the difference between what our clients say they want, and what they actually want.

More often than not, our clients have a strong vision of what they want the final product to look like, feel like, and what it ought to do.

The trickier part is elucidating the exact details of how you get the software to do whatever that is.

Give me "exactly"  what I want

In our experience, a ton of offshore projects are bid out and delivered in a model of: client says, “this is exactly what I want, give me that.”

So, your offshore development partner gives you exactly that, and not one single thing more.

If you missed anything in your requirements document, or if you expect your development team to fill in any blanks, or if you want suggestions that could help with stability or usability, etc… you’re kinda outta luck.

Part of what you get for the low price you’ve bargained is exactly what you asked for, with nothing else provided.

If your technical writing staff is absolute aces (or, for instance, if your project is pretty small and straightforward), then this might be all you need.

If you can write a perfect requirement doc that encapsulates everything you want and need (with 100% certainty that nothing will change along the way, or you’ll get a new, fresh idea during development), then you’re in the clear.

But for the vast majority of us, that’s not super realistic.

Give me what I "actually" want

Things change along the way during software development all the time.

Developers (at least those worth working with) have deep experience and expertise and might suggest a better or more elegant way to accomplish what you actually want, instead of just taking orders and delivering on them.

Onshore project manager bridges the gap between "exactly" and "actually"

An onshore project manager can be a vital conduit through which requirements flow and ideas ebb.

They can ensure you are indeed getting exactly what you want, without confining you to a rigid set of documented requirements that might not fully encapsulate your vision.

A truly collaborative relationship takes time and trust to establish, and working with a single point of contact in your onshore project manager can help fulfill that ideal.

In conclusion

Will an onshore project manager solve every problem in your offshore development project?

Probably not. But from our experience, we’ve found having that singular, stateside point of contact with your development partner can head off a bevy of pitfalls.

It’s why we recommend it to our clients, and it’s why we think you should implement it too.

Leverture was approached by a now-client with the challenge of updating their monolithic legacy application to a microservices architecture.

The Problem

Leverture was approached by a now-client with the challenge of updating their monolithic legacy application to a microservices architecture.

As part of the rewrite, we upgraded their authorization and authentication protocols by implementing Auth0.

However, in their legacy application, the customer service personnel had the ability to impersonate users in order to help them work through problems they were encountering. Based on the client’s business needs, this feature was integral to their product offering and so was an absolute “must have” in the new microservices environment too.

There was just one problem… while Auth0 used to support user impersonation for use cases like customer servicing, it had recently discontinued the feature. That left us in the less-than-stellar predicament of having to figure it out on our own.

Possible Solution

The first possible solution is relatively simple, but dangerously shortsighted.

To wit, since all the information would be stored in a class, you could simply add an impersonating flag and additional user properties to that class. When the system encounters the impersonating flag, you can just use the additional properties to implement the impersonation.

This is both feasible and will almost assuredly get the job done… but it isn’t a great idea.

You will most likely make the application harder to both maintain and test in the future.  

Plus, you’ll end up having to add a significant amount of code via conditional statements littered throughout your application to check the impersonation flag.
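
To illustrate, here’s a hypothetical sketch of what that flag-based approach tends to look like (the names are ours, not from the actual project); note how the same conditional gets repeated everywhere the user’s identity matters:

using System;

// Hypothetical sketch of the flag-based approach (illustrative names, not the real application).
public class FlaggedUser
{
    public int userId { get; set; }
    public string fullname { get; set; }
    public bool isImpersonating { get; set; }       // the added flag
    public int impersonatedUserId { get; set; }     // the added user properties
    public string impersonatedFullname { get; set; }
}

public static class AuditLog
{
    public static void LogLogin(FlaggedUser user)
    {
        // This same check ends up duplicated in logging, authorization, display logic, and so on.
        if (user.isImpersonating)
            Console.WriteLine($"{user.userId} {user.fullname} is impersonating {user.impersonatedUserId} {user.impersonatedFullname}");
        else
            Console.WriteLine($"{user.userId} {user.fullname} has logged in!");
    }
}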

This seems like quite the nightmare for either yourself or some other unsuspecting programmer down the road. Let’s do better than that, shall we?!

Solution

A more elegant approach would be to find a design pattern that solves the problem for you.

As in all things software, intentional, proper design will make the software easier to maintain.

For this particular use case, I recommend using the decorator pattern; it’s a structural pattern that allows you to add responsibilities dynamically to an object.

It consists of:

  • a component
  • a concrete class
  • a decorator class that both is a component and has a component.

You are effectively wrapping the concrete class with another class that allows you to add functionality to it.

Flow

While that may appear confusing, going through the flow will demonstrate its elegance.

First, the customer service representative will choose to impersonate a user.

Then, an API call will be made to Auth0 to add an impersonation user ID to the representative’s user_metadata.

Next, we will need to redirect to the customer’s application.

The customer service representative will be logged in as themselves, but the application will check for the impersonation ID in the user_metadata.

If the ID is present, we will need to call the Auth0 API to get that user’s information and then override the logged-in customer service representative’s information.

The POC

As you can see from the UML diagram above, we are going to build an abstract user object (BaseUser), a concrete user object (GenericUser), and a decorator object (ImpersonatedUser).

Let’s start by building our abstract User Object:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace impsersonationDemo.User_Objects
{
    // Component: the abstract user that both the concrete user and the decorator derive from.
    public abstract class BaseUser
    {
        public string fullname { get; set; }
        public List<string> permissions { get; set; }
        public int userId { get; set; }

        public virtual void LogStatement()
        {
            Console.WriteLine($"{userId} {fullname} has logged in!");
        }
    }

}

In the BaseUser abstract class, we have all the information relevant to the user.

We have the name, permissions and User ID as well as a log statement.

Then, we’ll create a concrete implementation of this abstract class.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace impsersonationDemo.User_Objects
{
    // Concrete component: a plain user with no added behavior.
    public class GenericUser : BaseUser
    {
        public GenericUser(int userId, string fullname, List<string> permission)
        {
            this.userId = userId;
            this.fullname = fullname;
            this.permissions = permission;
        }
    }
}

Now we can create our decorator class.

As we said before, it is a BaseUser, but it also has a BaseUser.

This is how this might look:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace impsersonationDemo.User_Objects
{
    // Decorator: is a BaseUser and also has a BaseUser (the admin doing the impersonating).
    public class ImpersonatedUser : BaseUser
    {
        private BaseUser adminUser;
        private int _userUserId;

        public ImpersonatedUser(BaseUser user, BaseUser AdminUser)
        {
            adminUser = AdminUser;
            this.fullname = user.fullname;
            this.permissions = user.permissions;
            this.userId = adminUser.userId;
            _userUserId = user.userId;
        }

        public override void LogStatement()
        {
            Console.WriteLine($"{adminUser.userId} {adminUser.fullname} is impersonating {_userUserId} {fullname}");
        }
    }
}

As you can see, this is different from the GenericUser class because in the constructor, we are passing in a User object and an Admin User object.

This allows us to add the functionality to the User and effectively take over the impersonated user’s identity while also keeping the Admin User’s information.

I am driving this point home by using both the admin user’s and the impersonated user’s information in the overridden LogStatement method.

using impsersonationDemo.User_Objects;
using System;
using System.Collections.Generic;

namespace impsersonationDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            ConsoleKeyInfo clinicKey;

            //Initialize User List
            var permissions2 = new List<string>();
            permissions2.Add("isAdmin");
            permissions2.Add("read:A");
            permissions2.Add("read:B" );
            permissions2.Add("read:C" );
            permissions2.Add("read:D");
            var adminUser = new GenericUser(400,"Sally Admin", permissions2);

            permissions2 = new List<string>();
            permissions2.Add("read:A");
            permissions2.Add("read:B");
            permissions2.Add("read:C");           
            var user1 = new GenericUser(12,"Gerry User", permissions2);

            permissions2 = new List<string>();
            permissions2.Add("read:A");
            permissions2.Add("read:B");          
            var user2 = new GenericUser(50, "Mary Maid", permissions2);

            permissions2 = new List<string>();
            permissions2.Add("read:A");          
            var user3 = new GenericUser(70, "John Smith", permissions2);
            do
            {
                Console.WriteLine();
                Console.WriteLine($"Welcome to the Customer Service screen {adminUser.userId} {adminUser.fullname},  You have the following permissions {String.Join(",", adminUser.permissions)}");
                Console.WriteLine();
                Console.WriteLine("Here is a list of users");
                Console.WriteLine($"Press 1 - {user1.userId} {user1.fullname}  {String.Join(",", user1.permissions)}");
                Console.WriteLine($"Press 2 - {user2.userId} {user2.fullname}  {String.Join(",", user2.permissions)}");
                Console.WriteLine($"Press 3 - {user3.userId} {user3.fullname}  {String.Join(",", user3.permissions)}");
                Console.WriteLine($"Press 4 - Go directly to Clinic Portal as your self");
                BaseUser impsonatedUser = adminUser;
                Console.WriteLine();
                var key = Console.ReadKey();
                switch (key.Key)
                {
                    case ConsoleKey.D1:
                    case ConsoleKey.NumPad1:
                        impsonatedUser = new ImpersonatedUser(user1, adminUser);
                        break;
                    case ConsoleKey.D2:
                    case ConsoleKey.NumPad2:
                        impsonatedUser = new ImpersonatedUser(user2, adminUser);
                        break;
                    case ConsoleKey.D3:
                    case ConsoleKey.NumPad3:
                        impsonatedUser = new ImpersonatedUser(user3, adminUser);
                        break;
                    case ConsoleKey.D4:
                    case ConsoleKey.NumPad4:
                        impsonatedUser = adminUser;
                        break;
                    default:
                        break;
                }
                Console.WriteLine();
                Console.WriteLine($"Welcome to the User Portal Home screen");
                Console.WriteLine();
                impsonatedUser.LogStatement();
                Console.WriteLine($"User Id {impsonatedUser.userId}");
                Console.WriteLine(impsonatedUser.fullname);
                Console.WriteLine( String.Join(",", impsonatedUser.permissions));
                Console.WriteLine();
                Console.WriteLine("What would you like to do?");
                Console.WriteLine($"Press 1 - finish impersonation");
                Console.WriteLine($"Press any other key - exit program");
                clinicKey = Console.ReadKey();
                
            } while (clinicKey.Key == ConsoleKey.NumPad1 || clinicKey.Key == ConsoleKey.D1);
            Console.WriteLine();
            Console.WriteLine("Good Bye");
        }
    }
}

In the above code block demonstrating how to use these classes, you can see that we are initializing each user as a generic user then displaying them in a list on the screen.

When the admin user chooses a user to impersonate, we create an impersonated user by passing in the constructor the user information and the admin user’s information.

Finally, we are logged into the User Portal as the impersonated user with their permissions and information.

The 4th option allows the admin to just go straight to the user portal with their own permissions.

Final Implementation

While I haven’t implemented this yet, I feel like I’ve proven the concept will work.

In order to get this to work in the real world, the customer service application will need to call the Auth0 API to add the impersonated user’s ID to the admin’s user_metadata.

From there we can navigate to the user application and a new token will be issued.

If the user application sees that the impersonated user ID is present, you will call the Auth0 Management API again to get the impersonated user’s information and permissions.

From there you will create the impersonated user-decorated object and work in the application.
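
As a rough sketch of what those two Management API calls could look like (our assumption, using plain HttpClient rather than any particular SDK; the domain, token, and the impersonation_user_id metadata key are placeholders, not values from the real project):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class ImpersonationMetadata
{
    // PATCH /api/v2/users/{id} merges the supplied user_metadata into the admin's profile.
    public static async Task StartImpersonationAsync(
        HttpClient http, string domain, string mgmtToken, string adminUserId, string targetUserId)
    {
        var request = new HttpRequestMessage(new HttpMethod("PATCH"),
            $"https://{domain}/api/v2/users/{Uri.EscapeDataString(adminUserId)}");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", mgmtToken);
        request.Content = new StringContent(
            "{\"user_metadata\":{\"impersonation_user_id\":\"" + targetUserId + "\"}}",
            Encoding.UTF8, "application/json");

        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }

    // GET /api/v2/users/{id} fetches the impersonated user's profile so the application
    // can build the ImpersonatedUser decorator from it.
    public static async Task<string> GetUserJsonAsync(
        HttpClient http, string domain, string mgmtToken, string userId)
    {
        var request = new HttpRequestMessage(HttpMethod.Get,
            $"https://{domain}/api/v2/users/{Uri.EscapeDataString(userId)}");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", mgmtToken);

        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}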

Summary

We were able to implement this solution without modifying the user object.

And because we’re using a design pattern to solve the problem, your application will be easier to maintain.

The decorator pattern allowed us to elegantly extend the concrete implementation by injecting it into the decorated object.

And, for further reading/additional resources, check out the source code and a list I compiled below on design patterns and how to use them effectively:

Source Code

Source for this Demo

Resources

Design Patterns – Elements of Reusable Object Oriented Software

Head First Design Patterns – Building Extensible & Maintainable Object-Oriented Software

Youtube – Decorator Pattern – Design Patterns (Christopher Okhravi)