Aspire partners with UiPath to accelerate Intelligent Automation (Tue, 20 Apr 2021)
https://aspire.jo/blog/technologies/aspire_partners_with_uipath/

Aspire And UiPath Announce Strategic Partnership To Empower Organizations With Intelligent Automation

In response to changing market trends, Aspire continues to expand its digital ecosystem of technologies to deliver the best-in-class digital solutions to our clients. We partner with strategic and dynamic software leaders who share our vision for a more integrated future.

Aspire announces a strategic partnership with RPA software provider UiPath. The partnership serves to combine Aspire’s Intelligent Automation consulting and implementation offerings with UiPath’s leading software solutions for enterprises looking to become more digital.

UiPath creates intelligent software robots that automate transactional processing, data manipulation, and cross-platform communication. This technology enables more sophisticated automations with AI capabilities such as document understanding, and provides analytics to measure the business impact of automation. This holistic ‘automation first’ approach has proven both substantial and transformative, allowing everyone to collaborate and putting automation squarely at the core of everyday work.

“Our partnership with UiPath enables us to accelerate our clients’ digital transformation journey. Aspire’s proven experience and excellence, coupled with this innovative technology will help our clients transform their business processes, and maximize their success.” says Abir Ghosh, Director and Head of Emerging Technologies and Partnerships at Aspire.

As part of this partnership, Aspire becomes a licensed reseller for UiPath software, offering a single purchase point for organizations seeking services and software. Aspire also becomes an Implementation Partner, enabling delivery and ongoing support for organizations that have selected UiPath as their RPA tool of choice to replace mundane tasks with automated processes and deploy a digital workforce.

About Aspire

Headquartered in Amman, Jordan, Aspire has provided high-quality IT services since 2002 to a large number of clients in the USA, Jordan and other countries around the globe. We have been a partner of choice for clients in Digital Transformation and IT Services. Our success has been driven by our customer-focused approach and commitment to building and maintaining long-term partnerships with our clients in the e-commerce, media, banking, telecom, and healthcare and wellness sectors.

Aspire’s knowledge and experience with proven tools and frameworks allows clients to achieve a higher return on investment and deploy staff resources effectively, while supporting flexibility and corporate objectives.

To learn more about how Aspire can help accelerate your intelligent automation journey, contact us.

About UiPath

UiPath is the leading end-to-end platform for automation, combining the leading Robotic Process Automation (RPA) solution with a full suite of capabilities and technologies like AI, Process Mining, and Cloud to enable every organization to rapidly scale digital business operations. More than 700 enterprise customers and government agencies use UiPath’s Enterprise RPA platform to rapidly deploy software robots that perfectly emulate and execute repetitive processes, boosting business productivity, ensuring compliance and enhancing customer experience across back-office and front-office operations.

Based in New York City, UiPath’s presence extends to 14 countries throughout North America, Europe and Asia. With a thriving RPA developer community of more than 120,000 worldwide, UiPath is on a mission to democratize RPA and support a digital business revolution. To learn more about UiPath, please visit https://www.uipath.com

An Introduction to RPA (Tue, 22 Dec 2020)
https://aspire.jo/blog/technologies/rpa-part-3/

Welcome to ‘Part 3’ of our Intelligent Automation blog series. If you haven’t read ‘Part 2’ yet, please click here. 

In this part of our blog series, we will shine the spotlight on Robotic Process Automation (RPA) and how it has changed the way organizations work.

Gartner defines Robotic Process Automation (RPA) as “A productivity tool that allows a user to configure one or more scripts (which some vendors refer to as ‘bots’) to activate specific keystrokes in an automated fashion. The result is that the bots can be used to mimic or emulate selected tasks (transaction steps) within an overall business or IT process. These may include manipulating data, passing data to and from different applications, triggering responses, or executing transactions. RPA uses a combination of user interface interaction and descriptor technologies. The scripts can overlay on one or more software applications.” 

At its core, RPA emulates human interactions with systems and executes them more quickly, accurately, and uninterruptedly in order to automate rule-based and repetitive processes. We can think of RPA as a digital worker that can interact with any system or application and is capable of performing a wide range of actions.

Since bots can utilize the user interface to work, there is no need to change systems or applications in order to automate a process. Furthermore, bots can work with systems and applications in two ways:

  1. Attended bots: RPA bots that sit on the user’s machine and interact with them to help with task execution and boost productivity.
  2. Unattended bots: RPA bots that run a process independently, on their own, in order to achieve end-to-end automation.

RPA has earned a reputation for being a relatively simple technology, easy to use and quick to deploy. However, it is crucial to select the right processes for RPA in order to tap into the huge potential of this technology and realize the benefits of deploying a digital workforce.

There are best practices and clear strategies within the RPA framework for picking the processes most suitable for RPA. Here are some of the characteristics of an RPA candidate process:

  1. A rule-based process that follows a logical sequence.
  2. A highly repetitive process with high transaction volumes.
  3. A stable process that depends on structured or semi-structured data.
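The three criteria above can be sketched as a simple scoring helper. This is purely illustrative: the process names, volume threshold, and scoring scheme are hypothetical examples, not part of any RPA product or methodology named in this article.

```python
# Illustrative sketch: scoring candidate processes against the three
# selection criteria above. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    rule_based: bool        # criterion 1: follows a logical sequence
    monthly_volume: int     # criterion 2: high transaction volume
    structured_data: bool   # criterion 3: structured/semi-structured inputs

def rpa_suitability(p: Process, volume_threshold: int = 1000) -> int:
    """Score 0-3; the higher the score, the better the RPA candidate."""
    return sum([p.rule_based,
                p.monthly_volume >= volume_threshold,
                p.structured_data])

candidates = [
    Process("Bank statement reconciliation", True, 5000, True),
    Process("Ad-hoc market research", False, 20, False),
]
ranked = sorted(candidates, key=rpa_suitability, reverse=True)
```

Ranked this way, the reconciliation process scores 3 out of 3, which is why it makes a good worked example later in this article.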

Organizations across all industries can benefit from automating rule-based and repetitive processes. RPA enables finance, banking, insurance, healthcare, and many other sectors to transform their operations and realize measurable outcomes and benefits such as:

  1. Reduced Costs: Reduced spending on FTE for back-end processes opens up opportunities to reinvest or take efficiency dividends. 
  2. Increased Quality: Quality of outputs is increased as chances of error are reduced significantly. 
  3. 24/7 Availability: Bots operating 24/7 can reduce workload peaks and improve response time.
  4. High Scalability: Bots can be scaled up and down as required – for example, to manage high volumes in peak times of the year. 
  5. Increased Productivity: Staff can be liberated from mundane and repetitive processing tasks and focus on value-added tasks. 
  6. Increased Compliance: RPA tools provide a full audit trail of the processes performed, which are rule-based by design.
  7. Non-Invasive Technology: No need to change the underlying systems or technology as RPA is deployed on top of the systems and applications. 
  8. Insights & Analytics: All activities are captured and visual dashboards can be created to identify areas for improvements.  

According to McKinsey, automating business processes through RPA can lead to a return on investment of 30-200% in the first year after implementation, and to savings of 20-25% on average.

To make optimal use of RPA for maximum gains, run the repetitive, rule-based processes on the digital workforce, and allocate the human workforce to the tasks that require human strengths such as judgement and complex reasoning, emotional intelligence, or communication skills.

Let’s take the example of a bank statement reconciliation process. It’s a common use case within the finance function across many organizations, and it ticks all the boxes of the RPA process selection criteria. The core of the bank statement reconciliation process is to reconcile the organization’s bank account balance against its cash account balance to understand the current cash position at any point in time. It is a rule-based, repetitive, high-transaction process where accuracy can’t be compromised.

In a manual bank statement reconciliation process, staff performs the following activities: 

  1. Accesses the online banking and downloads the account statement for a certain period of time. This activity is repeated for each bank and each account.  
  2. Opens the core system and generates the cash balance sheet report for the same account and for the same duration.  
  3. Manually reconciles the transactions in the bank statement report against the transactions in the cash balance report based on a certain criterion such as transaction amount and transaction date.  
  4. Prepares a report for the reconciliation results (reconciled transactions, unreconciled transactions) and sends it to the concerned party for review and action.  
  5. Repeats steps 2 to 4 for each bank statement.
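As a rough illustration of step 3 above, the matching logic a bot applies can be sketched in a few lines. The record shapes and field names here are hypothetical; a real bot would pull these records from the online banking portal and the core system rather than from in-memory lists.

```python
# Illustrative sketch of step 3: match bank statement transactions
# against cash ledger entries on (date, amount).
from collections import Counter

def reconcile(statement, ledger):
    """Return (matched, unmatched_statement, unmatched_ledger) entries."""
    pool = Counter((e["date"], e["amount"]) for e in ledger)
    matched, unmatched_statement = [], []
    for entry in statement:
        key = (entry["date"], entry["amount"])
        if pool[key] > 0:          # a ledger entry with same date/amount exists
            pool[key] -= 1
            matched.append(entry)
        else:
            unmatched_statement.append(entry)
    unmatched_ledger = [k for k, n in pool.items() for _ in range(n)]
    return matched, unmatched_statement, unmatched_ledger

bank = [
    {"date": "2020-12-01", "amount": 150.00},
    {"date": "2020-12-02", "amount": -75.50},
]
cash = [{"date": "2020-12-01", "amount": 150.00}]
matched, unmatched, missing = reconcile(bank, cash)
```

The unmatched lists would then feed the reconciliation report described in step 4.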

When it comes to checking and matching transactions manually, a number of challenges can be encountered, especially for organizations that deal with a large number of account types, institutions, payment types, time zones, and payment complexities.

RPA can bring automation to manual bank statement reconciliation: a bot can perform the repetitive reconciliation tasks and work with the systems just as a human does, but more quickly and more accurately, even for huge volumes of transactions. This automation introduces a range of benefits as a result.

As with the robotic bank statement reconciliation process, organizations can realize the same benefits of RPA with many other processes and functions across several sectors. Below are some of the most common RPA use cases:

Banking: 

      • Customer digital onboarding 
      • Card activation  
      • Compliance screening and KYC 
      • Outward transfers processing  

Insurance:  

      • Create/update/delete policy  
      • TPA reconciliation  
      • Quote generation  
      • Payment and receipt vouchers 

Finance:  

      • Bank statement reconciliation 
      • Vendor statement reconciliation 
      • Exchange rates real-time update 
      • Invoice processing  

Human Resources:

      • Employee onboarding  
      • Time and attendance management  
      • Payroll processing  

IT:

      • Reset password 
      • User management  
      • Periodic backup 
      • Email related tasks 

In recent years, many organizations have started to see RPA as a key component of their digital transformation strategy. While the definition of RPA is straightforward, the concept and mechanism can easily be misunderstood.

To implement RPA properly, we have to understand the overall RPA implementation roadmap. The figure below describes the stages of the roadmap and the main activities required in each stage:

The RPA Roadmap can arm organizations with the information and strategies they need to set a solid foundation for the RPA journey. Aspire can help organizations effectively use RPA to boost productivity, improve accuracy and grow the business.    

You can successfully implement RPA when you have the right team with you. Aspire is committed to devoting its knowledge and experience to setting up the proper implementation framework and advancing a transformative RPA initiative. We empower organizations to implement, run, and scale up a digital workforce with the aim of saving costs and attaining business continuity. Are you considering automating your repetitive business tasks but don’t know where to begin? Speak to our RPA team today; we are here to help you every step of the way.

Written by: Mohammad Keswani, Manager of RPA Solutions

What about BPA? Do you know what Business Process Automation is? (Wed, 11 Nov 2020)
https://aspire.jo/blog/technologies/what-about-bpa-do-you-know-what-business-process-automation-is/

Welcome to ‘Part 2’ of our Intelligent Automation blog series. If you haven’t read ‘Part 1’ yet, please click here.

As promised in Part 1, we will dive deep into each pillar of Intelligent Automation (IA). Staying consistent with the logical and historical development of automation, we will start by exploring Business Process Automation (BPA) first, then move on to the other two pillars of IA, Robotic Process Automation (RPA) and Artificial Intelligence (AI), in our upcoming blog articles.

Recall Techopedia’s definition of Business Process Automation (BPA): “it is the process of managing information, data and processes to reduce costs, resources and investment.” BPA increases productivity by automating key business processes through computing technology.

BPA aims to streamline data-driven processes by using technology to route information to the right person at the right time for decision-making and action. BPA helps organizations attain numerous benefits, such as:

  • Improve operational efficiency
  • Save time and effort
  • Improve customer experience
  • Minimize human involvement in menial tasks and support knowledge workers
  • Ensure the application of best practices and create real time transparency
  • Create an improved culture for collaboration and innovation

The point of BPA is to accelerate how work gets done through a variety of software tools under an integrated architecture. These tools cover the main elements of BPA in order to ensure effective process automation. The two main elements of BPA are:

  • Data
  • Business Rules & Logic

1) Data is an important asset to any organization. Industry experts have been raising awareness of the importance of data through many publications on the “Data Economy” and the idea that “data is the new oil”. Essentially, there are two types of data: structured and unstructured. According to Datamation, “structured data is comprised of clearly defined data types whose pattern makes them easily searchable; while unstructured data – ‘everything else’ – is comprised of data that is usually not as easily searchable, including formats like audio, video, and social media postings.”

i) We can think of Structured Data (also called Quantitative Data) as any data that comes in a pre-defined format and is straightforward to analyze, like dates, phone numbers, point-of-sale data, and weblog statistics, and that conforms to a tabular format such as Excel files or SQL databases.

ii) On the other hand, Unstructured Data (or Qualitative Data) does not follow a pre-defined model and is not organized in any particular manner. Unstructured data like documents, emails, newsletters, reviews and social media posts may include text, numbers, and dates, but with irregularities that make it difficult to analyze and make sense of.
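A small sketch can make the distinction concrete: structured data can be queried by field name, while unstructured data needs extraction logic before it can be analyzed. The sample CSV row and email text below are invented for illustration.

```python
# Structured vs unstructured data: an illustrative comparison.
import csv
import io
import re

# Structured data: tabular, field-addressable, easily searchable.
structured = csv.DictReader(io.StringIO("date,amount\n2020-11-01,150.00\n"))
rows = list(structured)
first_amount = rows[0]["amount"]   # looked up directly by column name

# Unstructured data: free text; fields must be extracted before analysis.
email = "Hi team, please pay invoice INV-2041 for USD 150.00 by Friday."
match = re.search(r"invoice\s+(\S+).*?(\d+\.\d+)", email)
invoice_no = match.group(1)        # extracted, not addressed by name
amount = float(match.group(2))
```

In practice the extraction side is far harder than a regex suggests, which is why OCR and AI capabilities such as document understanding come up later in this series.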

2) Business Rules & Logic: Business rules can be described as the instructions and constraints that create policies to control the behavior of the business. Business logic is the set of action steps that transform those policies into processes, enabling the business to achieve its goals.
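To make the rules/logic split concrete, rules can live as data (the policy) while a small piece of logic applies them. This is an illustrative sketch with hypothetical thresholds and approver roles, not a reference to any specific BPA product:

```python
# Business rules as data (the policy)...
APPROVAL_RULES = [
    {"max_amount": 1_000, "approver": "team_lead"},
    {"max_amount": 10_000, "approver": "department_head"},
    {"max_amount": float("inf"), "approver": "cfo"},
]

# ...and business logic as the steps that apply the policy.
def route_for_approval(amount: float) -> str:
    """Route a request to the first approver whose threshold covers it."""
    for rule in APPROVAL_RULES:
        if amount <= rule["max_amount"]:
            return rule["approver"]
```

Keeping the rules as data means the policy can change (new thresholds, new approvers) without rewriting the logic, which is the essence of the separation described above.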

A successful BPA solution must offer a platform to manage content and extract useful data from several sources in a variety of formats (physical and digital documents, emails and faxes), then route it around the business via an automated workflow for relevant actions as part of a process. It must also empower decision makers by delivering dashboards and reports that give a visual representation of live data. Such a holistic BPA solution forms an effective correlation between data (structured and unstructured) and business rules and logic, allowing businesses to centralize their work processes and improve operations and customer experience.

Organizations realize the value of BPA; however, getting started with the automation journey can be challenging. The points below can help you succeed in automation:

  1. Have good insight into the business activities, their frequency, and who is responsible for them.
  2. Set the business’s main goals and priorities.
  3. Identify the right processes for automation. Start with processes which are repetitive, time consuming, and have high business value.
  4. Select the right automation tool.
  5. Create a backup plan. It’s always better to have a plan B.

Often a process has to be improved before it can be automated in the best way possible. Process improvement can include merging steps to simplify the process, removing redundant steps, standardizing the process, and forming an outline for continuous improvement.

While every industry has different business needs, BPA can help any industry to benefit from the automation of as many manual processes as possible. Let’s zoom in on a common use case of ‘Accounts Payable’.

The accounts payable (AP) process is one of the most critical processes in a business. AP is the processing of payments owed by the business. In a manual AP process, the company receives paper-based invoices and routes them manually for actions and approvals to ensure that these invoices are legitimate and accurate before processing the payment.

Normally, the traditional AP process consists of the following manual activities:

The manual AP process relies completely on the physical movement of invoices and supporting documents and produces many challenges, such as:

  • Long processing time and slow responsiveness which leads to delays in payments.
  • Lack of ability to store, manage and track documents and information.
  • Manual data entry and increased possibility of human errors.
  • Inability to have real-time visibility into payment status or to share information with decision makers or suppliers.
  • Exposure to fraudulent information.

With BPA, the AP process activities can be optimized and reduced to be as follows:

A sound end-to-end solution can streamline the AP process by providing a channel to receive digital copies of invoices; capturing data from the received invoices and documents using an Optical Character Recognition (OCR) tool; automatically verifying the information and validating it against the respective PO or agreement; and then routing it via a smart workflow for automatic payment, with the ability to digitally stamp or sign the invoices.
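The automatic verification step described above can be sketched as a simple check of captured invoice fields against the matching purchase order. The field names, tolerance, and sample values are hypothetical; in a real solution the invoice data would come from the OCR output and the PO data from the ERP.

```python
# Illustrative sketch: validating captured invoice data against its PO.
def validate_invoice(invoice: dict, purchase_orders: dict) -> list:
    """Return a list of validation errors; an empty list means payable."""
    po = purchase_orders.get(invoice.get("po_number"))
    if po is None:
        return ["no matching purchase order"]
    errors = []
    if invoice["vendor"] != po["vendor"]:
        errors.append("vendor mismatch")
    if abs(invoice["amount"] - po["amount"]) > 0.01:   # tolerance for rounding
        errors.append("amount differs from PO")
    return errors

pos = {"PO-1001": {"vendor": "Acme Supplies", "amount": 1200.00}}
good = {"po_number": "PO-1001", "vendor": "Acme Supplies", "amount": 1200.00}
bad = {"po_number": "PO-1001", "vendor": "Acme Supplies", "amount": 1800.00}
```

An invoice that passes every check can flow straight to payment; one that fails is routed to a human for exception handling.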

BPA can help organizations overcome the struggle with delayed payments and frustrated suppliers, and achieve many benefits, including:

  • Elimination of manual data entry, invoice duplicates and human errors.
  • Digital storing of documents which can be easily managed and tracked.
  • Real-time visibility into valuable information, process metrics and task statuses.
  • Streamlined approval processes with less turnaround time.
  • Reduced costs of printing and physical storage.
  • Possibility of early-pay discounts and optimized cash flow.
  • Increased process control and easier compliance with regulations.

If you are looking for Intelligent Automation guidance, start your transformation today with Aspire as your Trusted Partner.

In the next blog, we will talk about the rise of Robotic Process Automation (RPA) and how it has changed the way we work. Don’t miss it.

Written by: Mohammad Keswani, Manager of RPA Solutions

Do you know what Intelligent Automation really is? (Tue, 27 Oct 2020)
https://aspire.jo/blog/technologies/do-you-know-what-intelligent-automation-really-is/

Automation has come a long way as we have developed applications and technologies to automate business processes, such as Enterprise Resource Planning (ERP) software, Customer Relationship Management (CRM) software, web/mobile apps, and solutions for specific business needs. Nevertheless, the reality is that businesses still rely on humans not only for critical decision making but even for mundane repetitive tasks, restricting productivity, quality and customer experience to the capacity of the human workforce.

As a logical next step, organizations started to adopt Business Process Automation (BPA), Robotic Process Automation (RPA), and Artificial Intelligence (AI) to eliminate manual effort, increase productivity and quality, and lower operational costs.

But before we explain what BPA, RPA and AI can do, let’s start with the basics and the definitions: 

Techopedia defines Business Process Automation (BPA) as “the process of managing information, data and processes to reduce costs, resources and investment. BPA increases productivity by automating key business processes through computing technology.”  

According to Gartner, “Robotic process automation (RPA) is a productivity tool that allows a user to configure one or more scripts (which some vendors refer to as ‘bots’) to activate specific keystrokes in an automated fashion. The result is that the bots can be used to mimic or emulate selected tasks (transaction steps) within an overall business or IT process. These may include manipulating data, passing data to and from different applications, triggering responses, or executing transactions.”  

Investopedia states that, “Artificial intelligence (AI) is the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.” 

While RPA remains valuable and a must-have technology for any organization, there is still a need for a strategic approach to embrace automation and to carry on with the digital transformation journey. That’s where Intelligent Automation (IA) comes in! IA is the term used to describe the combination of Artificial Intelligence (AI) and Automation for the holistic implementation of digital transformation. Intelligent Automation is not a stand-alone tool, but it connects different technologies to empower end-to-end process automation.  

In this new series of blog posts on ‘Automation’, we will tell you about IA, so that it can be a guide for your digital transformation journey.

Part 1 

Businesses have been employing technology to improve growth and gain a competitive edge while facing rapid change and customer expectations that have become higher than ever before. It is worth noting that the dramatic – and most likely permanent – impact of the COVID-19 pandemic has given organizations a greater reason to look for more innovative ways to keep their operations up and running. Now that a physical footprint has become discretionary and working with reduced human capacity has become policy, organizations have started to adopt IA to implement innovative solutions and save their businesses.

Together with the definitions already mentioned above, the three main pillars below broadly define IA and help shape the right Intelligent Automation strategy:

Adopting the three technologies mentioned above, using advanced tools within a consistent framework, can bring organizations multiple benefits.

The analyst and research community believes that IA will be a core part of the future operating model. Deloitte research reveals that “organizations currently scaling intelligent automation say they have already achieved a 27% reduction in costs on average from their implementations to date”. Another study from McKinsey back in 2017 already revealed that “companies across multiple industries that have been experimenting with IA are seeing 20-35% annual run-rate efficiencies as a result of automating 50-70% of tasks”. We believe these figures will keep growing over the years with further adoption of IA.

In line with the global trend, automation and AI-driven technologies are making their presence felt in the Middle East. The increasing demand for e-commerce, and the push by oil-dependent economies to reduce costs and increase operational efficiency, have created significant traction for RPA.

Banking, Financial Services, and Telecommunications started their RPA journeys as early adopters in the Middle East. Having software bots run repetitive, low-value and time-consuming tasks has enabled these sectors to revamp their business models and quickly realize the benefits of RPA. This has encouraged other sectors like Healthcare, Retail, and Oil & Gas to join the wave of Automation and AI towards a more IA-driven future.

On a side note, please keep in mind that IA is not about creating a workflow engine or building a web application, it is about deploying the right technologies in the right way to help organizations focus on the high-value areas of their offerings and to discover new business opportunities. 

Sounds easy, but how does it really work?   

As we always do at Aspire, we will go the extra mile and create a roadmap for IA implementation. In the next blog articles, we will dive deep into each pillar of IA to show how theory can turn into practice.  

Stay Tuned! 

Written by: Mohammad Keswani, Manager of RPA Solutions

The Rise of Digital Transformation in the Current COVID-19 Scenario (Thu, 07 May 2020)
https://aspire.jo/blog/technologies/digital-transformation/

The impact that digital technology is having on business and commerce worldwide is revolutionary. IDC puts a figure of $18 trillion on the combined added value businesses have gained from digital transformation to date, while Gartner has predicted that, by the end of 2020, digital would account for more than a third of commercial revenues.

Digital transformation has been the direction of many organizations for the past decade and has been on the agenda of many companies for a while now. Unfortunately, some companies left digital transformation at the bottom of their priority lists. This isn’t the case anymore! With the spread of the new coronavirus, many countries under lockdown and a global economic slowdown, we have seen that organizations that had previously implemented a digital transformation strategy have been better able to cope than those that resisted going digital.

Time to dispel the misunderstandings and misgivings and clarify the meaning and value of digital transformation. So, what is all the fuss actually about?

Digital transformation is a strategic journey of organizational change that starts with creating a highly motivated, self-managed and empowered team that is given the methods and tools enabling it to create a culture of innovation powered by data driven strategies.

Digital Transformation doesn’t mean the creation of a new website or the use of a certain technology. It is utilizing and adopting new technologies in order to help existing organizations adapt to ever-changing consumer behaviors while creating a flexible and sustainable competitive advantage to keep up with the fast pace of change.

For organizations to start being digital, their strategy should be built with these four foundational pillars in mind:

  • Create an innovative culture: enable employees to be creative. This will open up an organization to create and deliver its best value proposition.
  • Adopt Agile: create value-stream-based (product-based or service-based) teams aligned with organizational strategies and objectives, driven by customer feedback and market changes, so that learning comes from implementation rather than assumptions. Granting more autonomy to objective-driven teams gets products out faster, which brings early ROI, reduces the risk of building the wrong products, and sparks new ideas that boost the culture of innovation.
  • Have the right technologies in place: fast delivery needs proper technology to support it. Keep up to date with technologies that enable faster deployment and are decoupled to eliminate dependencies and guarantee higher-quality outputs.

  • Data = Gold: implement analytics tools to gather enough data to understand customer behaviors and make more informed business decisions.

During the digital transformation journey, an organization will usually need to overcome these six stages:

Digital is in Aspire’s DNA. We offer our services to help other organizations start or complete their digital transformation journey. Aspire has a solid record of supporting customers in utilizing leading industry practices and solving their business challenges by transforming their processes and relevant digital technologies. We connect our clients to their customers by analyzing and aligning their data, systems, and processes to create mutual value. We help organizations understand the maturity of their current processes and IT applications so they can adapt to current challenges through increased agility and efficiency in their processes and digital platforms.

Agile adoption can help organizations shape processes, build a competitive business strategy and create an innovative team culture. Decoupling technologies can help organizations stay flexible and increase their speed to market. DevOps practices help organizations build and enhance their digital solutions rapidly while ensuring quality. Analytics tools can help organizations leverage data and make data-driven decisions. The current crisis has been a wake-up call for organizations that have focused too much on daily operational needs instead of thinking about long-term digital business needs. Businesses that invest in and shift to digital platforms may mitigate the impact of the coronavirus outbreak. Let us help you keep your company running smoothly and seamlessly; our Digital Transformation offering includes:

We are all in this together, and we will survive this crisis by maintaining a business contingency plan, thriving in the digital era and providing the best possible customer experience. Transforming into an innovative digital organization means winning by pioneering any conceivable marketplace enabled by the adopted technologies. Start your transformation today with Aspire as your Trusted Partner.

Written by: Mohammad Taffal, Agile Coach/Scrum Master.

Why Kubernetes and Containers Have Become Essential Tools for DevOps (Tue, 17 Dec 2019)
https://aspire.jo/blog/technologies/why-kubernetes-and-containers-have-become-essential-tools-for-devops/

In the modern IT world, software delivery cycles are getting shorter and shorter, while the size and complexity of applications is increasing all the time. Not only that, but we are now operating in a digital multiverse: software isn't just running on individual endpoints or on-premises servers anymore, but on a multitude of public clouds such as AWS, Google Cloud Platform and Azure, on IaaS private clouds, and on any number of hybrid combinations of all of these.

For developers, this means there is pressure to code bigger and better programs for more and more environments at an ever-increasing speed. For IT operations teams, it means juggling configurations, rollouts, updates, maintenance, load balancing and more across varied and complex system and network architectures. No wonder agility and efficiency have become buzzwords across the industry.

Enter DevOps.

DevOps is a software production methodology that seeks to unify the development and operational management of software. Instead of having programming teams throw an application over the wall to their colleagues in ops who then have to deal with whatever problems arise in deployment, DevOps seeks to make what goes on when you run an app ‘out in the wild’ part and parcel of the developmental thought process, and vice versa.

The aim of DevOps is therefore to reimagine software production, from concept through development to deployment, as a single streamlined end-to-end process, with everyone, from developers and testers to the ops team, working together in harmony, and with automated processes driving speed and efficiency. Through collaboration and rationalised workflows, companies can deliver bigger and better apps for every environment they need on shorter and shorter timescales.


However attractive this vision of lean, agile, continuous production is, making it a reality depends on more than cultural change and how IT teams are organised to work together. There are technical challenges, too. One of these is how to marry application configuration with infrastructure configuration, especially when looking to deploy an app in multiple environments in the cloud.

This is the story of how two technologies, Containers and Kubernetes, have become essential tools for DevOps teams looking to streamline configuration from development through to deployment across multiple platforms, and the value this adds to businesses.

Containers: Accelerating Innovation

Software developers have always faced issues when it comes to making programmes suitable for different platforms and infrastructure environments. Say you want to make an app that can run on Windows, Mac and Linux. In the past you might have had to script three different versions, one for each OS, because Windows, Mac and Linux all interpret and execute code in different ways.

Nowadays, of course, you have mobile iOS and Android to throw into the mix. And it isn't just operating systems that developers have to worry about. Different IT infrastructures, whether an app runs on a bare metal server, on a private cloud or on one of the various public IaaS services, affect how applications perform in different ways, because of factors like load balancing and routing across different network architectures. This leads to what you might call infrastructure lock-in, with a single script having to be reconfigured for every single environment it runs on.

When your aim is to be able to roll out scripts at high speed across many different environments, this is highly inefficient and places a huge burden on operations teams. It’s exactly the sort of problem a DevOps approach seeks to solve, by addressing infrastructure lock-in at the development stage. But how can that be achieved in practice?

Containerization has been something of a game-changer for DevOps because it enables a “write once, run anywhere” approach to development. Containers are a type of virtualization. But whereas virtual machines like VMware work by creating abstract versions of complete servers which include their own operating system, containers abstract higher up in the stack at the application layer.


One of the consequences of this is that containers break the intrinsic link between application and infrastructure. The name 'container' is itself an analogy borrowed from shipping containers – boxes built to standard dimensions which make it easy to move the goods inside from one mode of transportation to another.

In software terms, containerization similarly makes application deployment across different environments much more agile and efficient. Developers can take a script, whether for a full app, for a particular function of an app or just for a small patch, bundle it with all the configuration files (including APIs), libraries and dependencies it needs, and end up with a lightweight, self-contained, self-sufficient asset that can be run anywhere. Packaged like this and deployed using a container platform like Docker, scripts can run successfully on multiple infrastructures, physical and virtual, solving the problem of portability across different environments.
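In practice, that bundling step is expressed as a short build recipe. As a minimal sketch (the file names and dependencies here are hypothetical, not taken from any specific project), a Dockerfile for a small Python service might look like this:

```dockerfile
# Start from a slim base image containing only Python and its runtime.
FROM python:3.11-slim
WORKDIR /app
# Copy and install the declared dependencies first, so this layer is cached.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code itself.
COPY . .
# The command the container runs when started.
CMD ["python", "app.py"]
```

Built with `docker build -t myapp .` and started with `docker run myapp`, the resulting image carries its code, libraries and configuration with it, so it behaves the same on a laptop, a bare metal server or any cloud that runs containers.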

From a DevOps perspective, containers therefore help to solve some critical issues for both development and deployment. Developers don’t have to worry about re-scripting the same code for different production environments (like rearranging shipped goods for a different mode of transport), while production and operations are likely to face far fewer issues with configuration, especially when confronting multiple cloud environments. In addition, the portability of containers makes it much easier to pass scripts back and forth between development, testing and production, aiding collaboration. For the production of cloud-native applications, the increased speed and agility this results in helps to accelerate innovation – time previously spent on reconfiguring and bug fixing for different environments can now be spent focusing on upgrades and improvements.

Kubernetes: What it is and Why it Matters

Containerization, then, resolves some of the challenges facing DevOps teams when it comes to the ‘horizontal’ scaling of applications, or deployment across multiple platforms. By abstracting applications from infrastructure, and bundling in scripts for functions with code for ‘run anywhere’ configuration, containers help to build a bridge between development teams focusing on app configuration and operations teams focusing on infrastructure configuration, which often leads to conflicts in priorities.

But what about vertical scalability? For modern digital enterprises, how fast and effectively you can launch a single app across multiple environments is not the issue. What really matters is the speed and efficiency with which you can roll out a continuous stream of many different applications, at scale, which work no matter what the platform.

It is here that DevOps teams run into one major drawback with containers. While it is relatively straightforward to configure a single container to work reliably in any given IT environment, configuring containers to work together is much more complex. So if you have a launch cycle which involves, say, running updates and patches for 100 different applications at any one time, you still have to duplicate the configuration work in at least 100 different containers. And if you push further into a microservices approach, where single applications are sub-divided into separate functions or ‘services’, each run from a separate container, the complexity is multiplied even further.

When you’re dealing with app production at high volumes, or with large, complex, multi-container applications, coding and recoding configurations for every single environment in every single container remains a cumbersome and time-consuming task. To carry on the shipping analogy, it’s like loading containers manually – yes, the containers are better than unloading and reloading items individually, but it is still laborious, long-winded work.

This is why, for DevOps teams looking to take agility to the next level, Kubernetes has become a go-to solution.

Kubernetes is a container orchestration tool originally developed by Google and now maintained by the Cloud Native Computing Foundation. According to the Kubernetes website, it functions as "a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation." Its name is Greek for helmsman or pilot, a reference to the way it helps teams steer the management of containerization.


Google was an early adopter of containers. As well as the advantages of bundling in environment configuration with the application code during development, the IT giant recognized that breaking everything down into small, discrete, manageable chunks of code could have major benefits in terms of agility and productivity – the less developers had to focus on, the more they could achieve. The only problem was, running software services at the scale of Google search, Gmail, YouTube and the rest, Google soon realised it was having to juggle billions of containers with a continuous cycle of updates and deployment. Having broken everything down into bite-sized pieces, it needed a way to reassemble them all again, and move them where they needed to go, efficiently and effectively, at a global scale. Kubernetes was its answer.

Kubernetes can be understood as a workload automation resource for teams working with containers. It resolves some of the key challenges DevOps teams face when working with containers at any sort of scale and complexity – managing, scheduling and networking clusters of containers within a microservices architecture, identifying and fixing issues within individual containers in a cluster to improve redundancy, speeding up configuration and deployment when you are trying to update dozens of different component parts at once.

Kubernetes is sometimes described as being “designed for deployment”, providing a natural balance to the benefits containerization brings to development. There are certainly elements to Kubernetes which live up to this billing – for example, the fact that an application can be installed and run using a single command. From an operational perspective, the big advantage of Kubernetes is that it provides a ready-made tool set for managing the entire application lifecycle from deployment onwards, saving teams the trouble of building their own infrastructure configuration solutions and making automation standard.
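To make the 'single command' point concrete: deployment typically works by handing Kubernetes a declarative manifest describing the desired state of the application. The sketch below is illustrative only; the names, image and replica count are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0   # any container image, written in any language
          ports:
            - containerPort: 8080
          livenessProbe:     # a ready-made health check: restart on failure
            httpGet:
              path: /healthz
              port: 8080
```

Applied with `kubectl apply -f deployment.yaml`, this one command asks Kubernetes to create the containers, spread them across the cluster, restart any that fail the health check and roll out future updates, with no environment-specific scripting required.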

However, like containers themselves, the benefits of Kubernetes are just as relevant to development as they are to deployment and operations, and it is therefore perhaps better to think of it as a solution ‘designed for DevOps’. The fact is, in a true DevOps team, it falls to developers to script solutions for operational issues before an application reaches the deployment stage, or at least come up with solutions to identified problems in subsequent iterations. From load balancing to examining logs to running system health checks to privacy and authentication, Kubernetes provides ready-made solutions, often run with a single command, to operational requirements that can pose major stumbling blocks for developers trying to get their applications production-ready, especially in multiple environments. Rather than worrying about how to make all of these functions work in different private cloud, public cloud, bare metal or hybrid environments, Kubernetes gives developers the space to focus on core application functionality – a little like they did in the pre-DevOps days.

Summary

In summary, containers and Kubernetes are often described as offering DevOps teams ‘best of both worlds’ solutions for efficient software production at scale – the flexibility of being able to decouple how an application performs from specific infrastructure, therefore enabling development for multiple environments, combined with a set of tools that make deployment and operational management as straightforward as they would be if you were only having to consider one environment.

On their own, containers deliver the twin benefits of re-engineering the relationship between application and infrastructure and breaking down application functions into smaller modular units. Because configuration is scripted only once, this enables "write once, run anywhere" deployment and leads to faster, more agile development, with more scope to focus on perfecting what the application does rather than on problem-solving how to make it run.

But when it comes to more complex, multi-container applications and large-scale distribution, you also need a solution for managing how containers are integrated and deployed. By simplifying configuration and deployment in any environment using straightforward code, Kubernetes provides the portability and consistency that smooths the path for applications to pass through development, testing, sys-admin, production and operations without a glitch. Completely agnostic, Kubernetes will run containers coded in any language and intended for any platform, giving DevOps teams supreme flexibility. And thanks to its use of automation, it helps to speed up DevOps cycles, supporting continuous development and, by removing the complexity of handling large numbers of containers at once, a microservices approach.

Posted by: Camila Panizzi Luz

Development at the Speed of Business https://aspire.jo/blog/technologies/development-at-the-speed-of-business/ Thu, 31 Oct 2019 13:31:48 +0000

Time-to-market is a critical consideration for software developers and IT teams looking to roll out new technology products, services and platforms, especially in the digital age, where markets are so much more fluid and fast-paced than they once were. It is increasingly important for businesses to capitalize on strong, innovative ideas quickly with rapid roll-outs and efficient production.

The longer it takes to get from concept to product, the more likely it is that a competitor will jump in first and steal an edge over you.

For big businesses, however, this increased focus on rapid development has posed a challenge. In terms of organization and structure, large enterprises are not necessarily set up with agility in mind. Whether the project is to develop a new customer-facing web or mobile app, or to engineer new internal IT systems, there are a lot of stakeholders to include and consult, a lot of due processes to follow, a lot of complexity to navigate in terms of operational and technological infrastructure.

There are cultural obstacles, too. Established companies will often have approached development in the same fixed, linear fashion for decades, starting with a business plan and pitching it for approval and backing (especially financial) before you even start on the actual project. The planning phase can take as long as the actual development. Even in software, the field where agile methodologies were first formulated and found favor, resistance has been frequently encountered in large companies. One study from New Zealand found that attempts by big software conglomerates to adopt agile practices were met with:

“General organizational resistance to change, lack of user/customer availability, pre-existing rigid frameworks, not enough personnel with agile experience, concerns about loss of management control, concerns about lack of upfront planning, insufficient management support, concerns about the ability to scale agile, need for development team support, and the perceived time and cost to make the transition.”

Over the past decade, however, things have started to change. Buffeted by increased digital disruption on a global scale, often spearheaded by innovative and nimble tech ‘unicorns’ that grow at astonishing rates on the back of highly agile business models, larger incumbent enterprises have started to change their point of view on development and production cycles. In many cases, it is a change driven by necessity – if everyone else is responding to changing market dynamics with frequent innovation and rapid product iterations, you risk being left behind if you don’t follow suit.

The Lean Influence

In 2011, entrepreneur Eric Ries published The Lean Startup, a book he intended to serve as a handbook for other entrepreneurs, drawing on his experiences with agile software development and lean manufacturing techniques. Central to Ries's theories is the idea that start-up businesses cannot afford the lengthy and costly development cycles that large companies habitually employ. With neither the time nor the resources available to soak up failure after a long development, successful start-ups work in a different way. They get a version of the product out quickly (the now-famous Minimum Viable Product, or MVP), they gather real feedback out in the field from real customers, they adjust the product according to what they learn, or else 'pivot' quickly away to another idea before they have wasted too much time and money on a non-starter.

What surprised even Ries initially was that it wasn’t just entrepreneurs who were attracted to his work. Big businesses, too, especially R&D and IT teams, began to take a great deal of interest in a methodology which, at its core, is about reducing time-to-market in development.

The key to understanding why lean and agile methodologies have become so attractive to big business is recognising how market conditions have changed. Where a strong idea might once have enjoyed a clear run at the market, as a consequence of globalization and other factors a company nowadays can expect its very best ideas to face multiple competitors in the marketplace by the time they get to launch. Speed has become essential, and Ries argues that management systems have to change if large organizations are to respond effectively.

Another key reason why enterprises have started to embrace rapid, agile development is digitalization. Like Ries, McKinsey argues that getting the most from digital transformation requires companies of all sizes to embrace new approaches to managing change and development.

Writing in the Harvard Business Review, Steve Blank, a mentor and colleague of Ries’s, said that a critical difference between traditional and lean development approaches is that established companies tend to execute prefabricated strategies, whereas start-ups are more focused on finding a viable model or solution. The latter approach is more efficient because it minimises the risk of creating a product that no one wants, by listening carefully to what the market wants in the first place. By cutting out the long-winded guesswork involved in writing business plans, you get to market quicker, and by getting to market quicker, you make a better product. The hub of this cycle is innovation, the continuous spinning of the wheel which ensures product and market are in sync.

The Lean Start-up Cycle. Source:
https://www.researchgate.net/figure/Lean-Startup-cycle-Source-Figure-designed-after-Eric-Ries-cycle_fig6_312087505

Efficiency, speed, innovation: lean, agile, continuous improvement. It all ties together in a different way of thinking about development, one that matches the speed of the marketplaces businesses are now operating in.

Organisational Efficiency and Slimmed-Down Development Cycles

So how exactly are enterprises slashing development times in order to keep a step ahead of the competition and create solutions that genuinely respond to market demand or real-life business need?

For large businesses, perhaps the key shift has been in the organizational structures placed around development, particularly with regard to the relationship between IT teams, business units and customers. In the past, these were set up along very linear lines, with business teams conducting initial user/customer research and passing feedback on to IT leaders, who then went away and worked on the coding more or less in isolation.

The lean/agile approach requires business units and product owners to take more of a co-development role in direct partnership with IT, sharing information on an on-going basis through a series of iterative cycles. This change in alignment towards closer collaboration is required to support the customer-focused development that agile methodologies demand, to allow the free flow of information from real users/customers ‘out in the wild’ back to development teams which allows them to efficiently test, adjust and re-test hypotheses until they get the product right.

Organizational inefficiency can pose a major obstacle to large organizations implementing lean and agile development practices. On one project Aspire was asked to consult on, we encountered a highly heterogeneous IT environment made up of multiple legacy and third-party applications, where a lack of integration was negatively impacting time-to-market for new products as well as the overall customer experience. We streamlined development operations by implementing an Integration Test Strategy that reduced the time it took for products to pass through testing with a series of different business units one after another, accompanied by a training and education programme to support the culture shift this represented. We were able to help the client achieve a 50% reduction in time-to-market for new services, while operational costs were reduced by 25% thanks to a combination of more streamlined processes and automation.

Finally, one other example of the way that enterprises are changing structure to make development leaner and faster is the use of innovation labs. Eric Ries states clearly that he believes the traditional claim in business that ‘everyone should be responsible for innovation’ is a mistake. He argues that innovation should be treated as an area of defined responsibility, just like having a team responsible for UX on a website, or for back-end security on apps, or for transaction platforms. A lean enterprise, he argues, is built in a modular fashion, with small, agile teams given clear areas of responsibility and linked together to promote efficient information exchange and workflows. Innovation needs to be given a defined place within this structure like any other function. Development labs are one way to achieve this.

Large banks have been particularly enthusiastic about embracing the innovation lab model as they try to pivot away from traditional banking structures to embrace fintech and online financial services. Banks are creating innovation labs, sometimes referred to as 'start-up incubator hubs', not only to drive rapid, creative tech development within their own organizations, but also to nurture next-gen solutions for the industry as a whole through partnerships with other fintech players.

Summary

Eric Ries’s ideas were aimed at making product launches less expensive and risky for start-ups by simultaneously shortening development cycles and ensuring the product was something people actually wanted to buy. But coupled with a range of other factors that include the rapid acceleration of digitalization and increased market competition, the core principles Ries wrote about have come to be taken very seriously in the world of enterprise software development.

Even for the very largest organizations, the faster you can get a website, application or digital product up and running and available on the market, the sooner you can gain the benefits from it, whether that be efficiency gains within your own operations, a better customer experience or grabbing market share with sales of a new product. Beyond reducing time to market in development, the lean, agile approach forces large businesses to adapt and streamline how they are organized to maximize efficiency, reducing costs, increasing output and promoting the value of innovation and continuous improvement. Finally, it delivers a better customer experience by making development responsive to demand and real-life use cases.

A/B Testing: Its Value for Lean Start-ups and MVP https://aspire.jo/blog/technologies/a-b-testing-its-value-for-lean-start-ups-and-mvp/ Wed, 09 Oct 2019 16:22:26 +0000
Businesses have to juggle a variety of competing priorities when planning IT projects. Guaranteeing robust levels of quality and performance, achieving a final product that is attractive and intuitive to use and completing development and launch with minimum disruption to current operations all rank highly.

There is also a strong incentive for completing projects as quickly as possible, or at least getting new applications, platforms and systems up and running without delay. In the past, this objective was often sacrificed to the demands of quality, usability and minimal disruption. Development would take place in the background, away from the day-to-day workings of the business so there would be no interference, with the final product only rolled out on completion, potentially after many months of work.

The frustration for businesses was that this approach often proved expensive, with considerable resources diverted away from front line operations for extended lengths of time, and also delayed any value benefits the organisation could get from the new product.

This is one of the core issues which Lean, one of the Agile family of development methodologies, attempts to resolve. According to Lean, quality and usability can be combined with much faster roll out, giving a business the benefits of a new product sooner, without compromising current operations.

The key to this approach is the concept of the Minimum Viable Product (MVP), the most basic iteration of the final desired product that is still capable of delivering the core intended functions. An MVP can be launched in real-life environments in a fraction of the time it takes to complete a full development. The idea is that the company can be reaping some of the value from its new product in a matter of weeks, and can then refine and add to it in production, ultimately leading to a final version that is more carefully honed to actual use and operations.

This progression from MVP to final product, through a series of iterative cycles typical of all Agile methodologies, relies on the close integration of testing with development. Testing is used to assess the impact of added and refined features in parallel to a live system, with the best options able to be deployed very quickly to ensure highly efficient progress. There are numerous types of testing methodology used in Lean development. In this article, we will explore one of them – A/B testing.

Lean Start-ups and MVP

First, a little more background on the concept of MVP, where it comes from, its aims and its benefits. The origins of MVP can be traced back to Eric Ries’s influential book The Lean Startup, which sought to apply the principles of lean manufacturing, with its focus on eliminating all waste and inefficiency from production cycles, to product development for business start-ups.

The idea is that entrepreneurs looking to make waves in a new market face certain structural disadvantages compared to incumbents, which are often larger, well-established businesses. Working with minimal resources, startups need to be as sure as possible that their product will attract interest, and they need to make a virtue of their smaller size by showing greater agility in how they respond to market demands.

Ries proposed a model for how startups could achieve these objectives, known as the Build-Measure-Learn loop. And it was at the 'Build' stage that Ries outlined the concept of a minimum viable product. He argued that startups were better off getting a version of their product to market as soon as possible, giving them a chance to grab market share and start earning, rather than spending months in the development lab. The key then was to measure, or test. If your product proved popular, great – now you could assess ways to make it even better and add value. If it wasn't having the impact you hoped for, it was time to pivot – to make whatever changes were necessary. All changes would be tested in the market, informing the learning stage, which would then lead on to the next iterative build.

This simple model proved so effective that it quickly spread beyond the world of entrepreneurial start-ups. According to one survey, 82% of R&D and product development executives at blue chip US companies say they have implemented at least some aspects of Lean into their strategies. The same study also listed the top five benefits of Lean as:

  • Focusing decision-making on evidence and data.
  • Faster development cycles and quicker realisation of ideas.
  • Better-quality feedback from customers and stakeholders, based on what they buy rather than what they say in a focus group.
  • Speaking to and observing real customers and stakeholders.
  • Greater flexibility about making changes to ideas.

The Lean approach has been particularly successful in software development. Google famously launched its first search engine as a basic HTML page to get a feel for how web users would react to it. Successful tech startups like Dropbox and Pebble started out with MVP releases that were attracting pre-orders while still very much in the beta stage. We will return to some of these success stories later, but now let’s look at the process by which lean developers evaluate and refine their MVP into the final product – testing.

What is A/B Testing? What are the Benefits?

One way to understand the role of MVP in Lean development is to see it as the kickstarter for the testing and improvement cycle in a live environment. It gives developers their first tangible test, their first opportunity to assess performance and gather feedback which will inform the ‘learn’ process ahead of the next iteration.

But in order to learn anything meaningful from the launch of an MVP and all subsequent updates, developers must take a structured, quantitative approach to testing which provides measurable data they can base next step decisions on. There are a range of different methods used to test performance of products in the market and to inform decision-making for future iterations. But one of the methodologies advocated by Eric Ries himself in The Lean Startup is A/B Testing.

Ries describes A/B testing as “an experiment in which different versions of a product are offered to customers at the same time.” Digging deeper into how an A/B test works, the idea is to offer near-identical versions of the same product, but with one feature or function altered to offer two alternatives – A and B. By releasing both simultaneously, the development team can monitor user behaviour and preferences to establish which is preferred.

A/B testing can be done by giving users two options and seeing which proves the most popular, e.g. two different layouts on a web page. Equally, it can be done by splitting user groups into different 'buckets', allowing them to experience either option A or option B, and then gathering feedback to compare.
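The 'bucket' split is usually implemented by assigning each user to a variant deterministically, so the same user always sees the same version on every visit. A minimal sketch in Python (the experiment name and key scheme are illustrative assumptions, not any specific product's API):

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant by hashing.

    The same (user_id, experiment) pair always maps to the same
    variant, so a user sees a consistent experience across visits.
    """
    key = f"{experiment}:{user_id}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return variants[digest % len(variants)]

# Over many users the split is roughly 50/50:
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_bucket(f"user-{i}", "homepage-layout")] += 1
print(counts)
```

Hashing rather than random choice matters here: without storing any state, the function still produces a stable, roughly even split across the user base.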

The main point is to change only one variable (i.e. a feature or function) at a time, so that you understand clearly the impact that variable has on user preferences and behavior. This has two clear benefits over building complete versions of a product before launch. First, it makes it much more likely that your final version will be something your users respond well to, as you have chosen features and functions piece by piece according to evidence of real-world preferences. Second, it means that your product is out there 'in the wild' winning over early adopters, building the presence of your brand and generating revenue even as you refine it. With cycles of A/B testing, you are methodically improving usability and quality step by step, making it much more likely that you end up with a product people want to use and pay for.
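Once both buckets have run, deciding whether the observed difference is meaningful is a statistics question. One common approach (a general technique, not something prescribed by Ries) is a two-proportion z-test on the conversion counts; the numbers below are invented purely for illustration:

```python
from math import erf, sqrt

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (lift, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# Hypothetical results: variant B converts 120/1000 visitors vs A's 100/1000.
lift, p = ab_significance(100, 1000, 120, 1000)
print(f"lift={lift:.3f}, p-value={p:.3f}")
```

A small p-value (conventionally below 0.05) means the lift is unlikely to be chance; in this made-up case the p-value of roughly 0.15 would suggest collecting more data before declaring B the winner.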

A/B Testing and MVP in Action

A/B testing has perhaps become best known as a tool used by digital marketers and web developers, who use comparisons between ‘live’ alternatives to tweak UX in line with identified user preferences. For example, A/B testing is recognised as an effective means of improving rates of conversion from website hits to desired actions like making a purchase or registering an email address.

The principles of A/B testing can be applied to any type of product. But what makes it particularly well suited to digital assets like websites and cloud apps is the very rapid feedback and results developers can get, thanks to very short paths to market. That is why some of the best examples of A/B testing in action come from software development and digital start-ups.

Eric Ries describes one as providing him with inspiration for The Lean Startup. He describes the journey of Votizen, a US-based online voter organisation platform first developed in 2007. The product had a slow start in life, before David Binetti started A/B testing product features and design. Once he started testing and iterating, Binetti saw registration numbers climb, activation/participation rates rocket and, by 2010, he was able to secure $1.5 million in funding.

An excellent example of A/B testing in action is the story of mobile app Bounce, a handy personal organising tool that combines your calendar and your location to tell you when you need to leave for an appointment. Bounce’s developers effectively ‘crowdtested’ an optimum price point for the app by A/B testing different prices through crowd funding platform Self Starter – people weren’t just signing up to back the project, they were signing up to pay a specific price for it. They found, for example, that a higher proportion of backers would pay $10 compared to $5, telling them they could afford to set a higher price.

Finally, for an example of how the principles of MVP and A/B testing can be taken right to the heart of a company’s business strategy, we can single out Atlassian, the development firm behind enterprise collaboration apps like Trello and Jira. As a young startup offering its products on a SaaS ‘self-service’ model, Atlassian found that the key to cementing and growing its market share was focusing acutely on user and customer experience, and being prepared to experiment continuously to find the best options available. The company’s methodology was to formulate hypotheses about what might work, and then simply test them by giving real customers A/B alternatives, thus gathering data to back up the original suggestion.

Find out more

At Aspire, our mission is to help clients develop high quality software solutions that answer real world business challenges and meet the needs of their customers and users, quickly, efficiently and with value in mind. We apply A/B testing and other aspects of Lean methodology across web application development, mobile application development, application migration and integration, software product development and more. To find out more about how we work and how we can help you, contact us today.

Agile Development: Choosing the Right Methodology to Improve Workflow
https://aspire.jo/blog/technologies/agile-development/
Thu, 26 Sep 2019

A little under two decades ago, a group of 17 software engineers met at a ski resort in Utah, USA. Their agenda was straightforward: they were disillusioned with a software development industry they believed was cumbersome, reactive, inefficient and more concerned with process than with meeting the needs of end users. The group, which called itself the Agile Alliance, wanted change.

The result of that meeting was the publication of the Agile Manifesto, a clarion call for a new approach to software and systems development built around four core principles:

  • To prioritise individuals and interactions over processes and tools;
  • To value working software over documentation;
  • To focus on collaboration with customers over contract details;
  • To approach development projects with an adaptable, flexible mindset rather than a determination to stick to a preconceived plan.

It is this last principle, calling for adaptability once development projects start, that has come to define Agile methodologies more than any other, and which signals the most obvious break with so-called Waterfall workflows. First proposed in 1970, the Waterfall model describes a linear approach to development in a series of logical steps, whereby you can only progress to the next stage upon completion of the one previous.

For the Agile Alliance, this model was far too prescriptive and restrictive, and ignored the realities of developing solutions for living, breathing operations in the real world – requirements change, unforeseen problems occur, and different and better solutions present themselves as you go. In addition, with its strict sequence of phases to be completed in order, Waterfall inevitably leads to drawn-out projects which can take many months to reach deployment.

Sprint cycles

Instead, Agile methodologies propose a cyclical model of development. The idea is that a single project might have many distinct cycles of development and iteration – known as ‘sprints’ – where the aim is to get a version of the application, or at least some of its functions, deployed and updated each time. Once up and running in a production environment, the product can be reviewed and assessed and changes made accordingly in the next iteration. It means clients get working versions of software up and running within weeks rather than months, and there is a process of continuous improvement to shape it according to their operational needs.

First proposed in 2001, Agile thus laid the foundations for major trends in software development such as DevOps, continuous delivery and design thinking. Coupled with parallel developments such as virtualisation and cloud computing, the influence of Agile can also be seen in approaches like headless and microservices.

Agile itself has continued to evolve as a concept, with the effect that it is now best understood as a family of methodologies rather than as a single approach. While the central goals of the various Agile models remain the same – rapid iteration, adaptability to change and partnership working with the end user – they differ in how they define and describe processes and sequences. Or, to put it another way, they differ in how they define and describe the workflow involved in a development project.

Adapting your workflow

This is actually extremely useful for developers and their clients alike. By emphasising different elements of the iterative workflow, different models within the Agile family offer different benefits and achieve different effects. This provides development teams with an extensive toolkit of options that can be used to achieve a wide range of deliverables, depending on what outcomes best suit the client.

As this blog from Cisco argues, the point is not to be agile for its own sake. The key is knowing how to be agile in the right way to achieve specific goals, to the benefit of both the service provider and the client. Speed of deployment, cost efficiencies, dynamic scalability, quality service and more may present themselves as operational priorities on different projects, or even at different times within the same project. Knowing how to adapt workflows to achieve those ends gives developers a competitive edge, and that can be achieved through understanding when and how to employ different Agile approaches.

For example, Scrum is a methodology which brings the concept of multiple, ongoing, incremental iterations to the fore – it is where the concept of a ‘sprint’ originates. The idea is to break down systems under development into separate functions, deploy each in a logical sequence and adapt priorities in close consultation with end users at the end of each cycle.

If Scrum focuses on workflows that deliver rapid, adaptable iterations, Lean Software Development and the related Kanban models highlight cutting waste from workflows and therefore driving efficiency gains. Lean uses value stream mapping to identify and prioritise the most valuable features of a system, empowering individuals and small teams to work both collaboratively and independently to maximise human resources. Kanban adds the element of workflow visualisation into the mix to help teams keep sight of the bigger picture and maintain an optimum work-in-progress balance – neither overstretched nor underutilised.

All Agile approaches emphasise the importance of collaborative working between developer and client to ensure products are developed in response to real need. But this is especially important to the Extreme Programming model, which aims for exceptionally high levels of quality and responsiveness by placing customer feedback at the heart of continuous development cycles.

To give one final example, Crystal is actually a sub-family of Agile methodologies in its own right, and its focus is adapting workflow to suit the make-up of your development team. The idea is that teams of different sizes and skill sets require different approaches to optimise how they work together, so understanding the process dynamics of teamwork adds another dimension to how you can adapt your workflow.

In summary, the ethos of the original Agile manifesto was to advocate developmental approaches which prioritised speed, client needs, collaboration, quality outcomes and, of course, adaptability. It is fitting that many different approaches to achieving these ends have evolved, each with a slightly different focus, to arm developers with a range of different models for how to adapt workflow to achieve different ends.

To find out more about how Aspire uses Agile methodologies to develop robust, flexible and scalable solutions for our clients, contact us today.

Combine Framework, First Impression
https://aspire.jo/blog/technologies/combine-framework-first-impression/
Mon, 23 Sep 2019

Here at Aspire, we pride ourselves on keeping up with the latest technologies, frameworks and releases, in order to share our best practices and experience with our clients. Our very own Senior iOS Developer, Ahmad Fayyas, wrote a great article on Medium about his first impressions of the Combine framework. Check it out below:

A while ago, I published an article about my first impression of the SwiftUI framework. In the article, I mentioned that SwiftUI could be more than a UI handler for our projects. After diving a bit deeper into SwiftUI (and Swift 5.1), I recognized that states and bindings are (obviously) really nice. By using some of the provided property wrappers such as @State, @Binding or @ObjectBinding, we are able to connect our views with our data models very easily and more expressively.

One of the new things I noticed was that in order to declare an @ObjectBinding variable of your custom view model type, you have to make it conform to the BindableObject protocol. After that, you can implement the didChange property, which is of type PassthroughSubject. If we trace the PassthroughSubject hierarchy, we’ll find that it is a class and a concrete implementation of the Subject protocol, which in turn conforms to the Publisher protocol – in short, it is basically a Publisher. Okay, so what is this all about?

Are we getting far away from SwiftUI?

The answer is yes. When reviewing the Publisher documentation, you’ll notice (it’s clear enough) that it belongs to the Combine framework.

This is not a tutorial on coding with the Combine framework; it is just an iOS developer’s first impression of how we’ll deal with it.


Hello, Combine!

Apple simply describes Combine as:

‘Customize handling of asynchronous events by combining event-processing operators’.

Honestly, when I started reading the “technical terms” used in the framework documentation – Publisher, Just, Subscriber, Subscription, Operators, Cancellable, Scheduler – my very first impression was that my mind automatically connected them, as keywords, with the world of FRP!

So, YES, if you are familiar with one of the FRP frameworks such as RxSwift or ReactiveCocoa, then congratulations! Now you know the main reason for the Combine framework’s existence. We can now say that Apple supports the FRP paradigm natively, without the need to deal with a third-party framework when building our projects.


Finally, Apple!

But wait… What if I have no idea what FRP is?

Well… I would say it is a good opportunity to find out.

Basically, dealing with Functional Reactive Programming (FRP) lets you worry less about managing data and allows you to concentrate on how your apps should work. Here are some of the things that “worry less about managing data” covers:

  • Which parts of your app might be affected by a change.
  • The amount of (in some cases “tedious”) boilerplate code that you might otherwise need to implement (hello Delegates, Target-Actions, KVO…!).
  • Handling synchronous/asynchronous changes, and connecting their impact to the app’s default data flows.

Additionally, it might be worth mentioning that when comparing FRP with the usual standard approach(es), most of the time you’ll need to write less code to achieve the same results; it’s about doing things “declaratively”.

Less code?!

To clarify, take a look at one of the most common cases in our apps: dealing with multiple asynchronous calls. Obviously, we should think about one of the concurrency techniques, and one of the proper choices would be GCD’s DispatchGroup. Whether the tasks run on the same queue or on different queues, we are still able to observe their execution and completion in the group. Example:

let queue = DispatchQueue(label: "reverseDomain", attributes: .concurrent)
let group = DispatchGroup()

queue.async(group: group) {
    performAsync01()
}

queue.async(group: group) {
    performAsync02()
}

queue.async(group: group) {
    performAsync03()
}

group.notify(queue: DispatchQueue.main) {
    // tasks executions are finished
}

Keep in mind that the output of an asynchronous task might be a returned value; therefore, in addition to the above code, we might need to declare an instance variable to store the returned value in order to access it.

So, what about Combine?

let myPublisher = Publishers.Zip3(photoSubject, stringSubject, voidSubject)

myPublisher.sink { (asset, string, _) in
    // tasks executions are finished
    // additionally, we can directly access the tasks (subjects) outputs
}

Note that photoSubject, stringSubject and voidSubject are predefined Subjects (which are basically Publishers). We just applied zipping to the upstream publishers using Zip3, and that’s all! The beauty of it is not only the amount of code that has to be written; it’s also the paradigm used to achieve such a task.

Moreover, once you work with publishers, you’ll find that there are many useful operators to act on the received value(s) from publishers and republish them. It’s awesome!
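As a small illustration of that point (the names below are mine, not from Apple’s documentation), a PassthroughSubject can feed a chain of such operators before the values reach a subscriber:

```swift
import Combine

// A subject is both a Publisher and an entry point for imperative code.
let numbers = PassthroughSubject<Int, Never>()

var received: [String] = []
let subscription = numbers
    .map { $0 * 2 }            // operator: transform each value
    .filter { $0 > 2 }         // operator: drop unwanted values
    .sink { received.append("got \($0)") }

numbers.send(1)  // becomes 2, filtered out
numbers.send(2)  // becomes 4, delivered
numbers.send(3)  // becomes 6, delivered
// received == ["got 4", "got 6"]
```

Each operator returns a new publisher, which is what makes these chains composable.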

What does it mean to me as a developer?

We can see that we’ll have a chance (and it might be the only way in the future) to change the approaches we use for developing our iOS apps with these provided frameworks. Some developers are keen to learn them, some are getting bored of having to keep up with all this new stuff, and the rest are just like: “yeah, whatever…”.

Although I won’t say that learning and working with Combine is a must (at least for now), whether we like it or not, you should keep in mind that it exists. Perhaps in the near future we’ll see a new generation of iOS developers who only know how to settle things with Combine and nothing else! Is it possible?!

Us, the old generation.

Furthermore, you might have heard of the theory of:

In order to apply an alternative architectural design pattern such as MVVM, I have to know the “reactive programming” thing.

It doesn’t have to be absolutely correct, but it makes sense! That’s because applying a pattern such as MVVM appropriately requires two-way data binding, which is painful to achieve without following the reactive programming approach.
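As a rough sketch of one half of that binding (model to view; the ViewModel and Label types here are simplified stand-ins of my own, not UIKit classes), Combine’s assign(to:on:) can push model changes into a view property:

```swift
import Combine

// A view model exposing its state as a publisher.
final class ViewModel {
    let title = CurrentValueSubject<String, Never>("Hello")
}

// A simplified stand-in for a UI label.
final class Label {
    var text: String = ""
}

let viewModel = ViewModel()
let label = Label()

// Bind: every value the publisher emits is written to label.text.
let binding = viewModel.title.assign(to: \.text, on: label)

viewModel.title.send("Updated")
// label.text == "Updated"
```

The reverse direction (view to model) would need a publisher for the control’s events, which is where UIKit requires a little more glue than SwiftUI.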

By the way, am I allowed to use Combine without using SwiftUI?

Yes, you are not limited to using Combine only with SwiftUI. You can use the Combine framework with UIKit, which means you can build the UI part of your application as you usually do while leveraging Combine. Rejoice!

Additionally, there is good news worth mentioning: the Foundation framework and the Combine framework are compatible. Citing from the Combine documentation:

Several Foundation types expose their functionality through publishers, including Timer, NotificationCenter, and URLSession. Combine also provides a built-in publisher for any property that’s compliant with Key-Value Observing.

You can combine the output of multiple publishers and coordinate their interaction. For example, you can subscribe to updates from a text field’s publisher, and use the text to perform URL requests. You can then use another publisher to process the responses and use them to update your app.

This means that it will make our lives easier when it comes to dealing with some of the Foundation types with Combine. I personally hope that we’ll see more of it in the future.
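As a minimal sketch of that Foundation integration (the notification name and userInfo key are my own, purely for illustration), NotificationCenter’s built-in publisher can be consumed like any other Combine stream:

```swift
import Combine
import Foundation

// A hypothetical notification, used only for this example.
let noteName = Notification.Name("DataRefreshed")

var messages: [String] = []
let subscription = NotificationCenter.default
    .publisher(for: noteName)
    .compactMap { $0.userInfo?["message"] as? String }
    .sink { messages.append($0) }

// Posting delivers synchronously to subscribers on this thread.
NotificationCenter.default.post(name: noteName,
                                object: nil,
                                userInfo: ["message": "refreshed"])
// messages == ["refreshed"]
```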

And finally:

As I mentioned before in the SwiftUI article, the declarative paradigm is already popular. Even if it’s new to you, working with it will not only be reflected in your iOS development knowledge, but will definitely expand your general programming skills, as well as your problem-solving skills. When it comes to iOS development, the beauty is that we have more dynamic facilities to achieve the desired patterns and approaches; it’s more like “learn it once, apply it anywhere”.

Thanks for reading!

Written by: Ahmad Fayyas

Bio: Full-time iOS developer. If I’m not in front of my PC coding or playing video games, you can find me hanging out with friends, lifting some weights or sleeping!

*Opinion disclaimer: Please note that the views, thoughts, and opinions expressed in the article above belong solely to the author, and not necessarily to Aspire as an organization.
