RTMIS

The Kenya Rural Urban Sanitation and Hygiene (RUSH) platform is a real-time monitoring and information system owned by the Ministry of Health. The platform aggregates quantitative and qualitative data from county and national levels and facilitates data analysis, report generation and visualizations.

Project Sheet

Name
RTMIS (Real Time Monitoring Information Systems)
Project Scope
The government of Kenya needs a way to effectively monitor sanitation and hygiene facilities nationwide. Akvo is developing an integrated Real Time Monitoring Information System (RTMIS) to collect, analyse and visualise all sanitation and hygiene data in both rural and urban Kenya, allowing the Ministry of Health to improve sanitation and hygiene for citizens nationwide.
Contract Link

Project Dashboard Link
Start Date

End Date

Repository Link
https://github.com/akvo/rtmis
Tech Stack

List of technologies used to execute the technical scope of the project:

  • Front-end: JavaScript with React Framework
  • Back-end: Python with Django Framework
  • Testing: Django Test Framework, Jest
  • Coverage: Coveralls
  • Documentation: RTD, dbdocs
  • CI & CD: Semaphore
  • Hosting: GKE
  • Database: PostgreSQL, Cloud-SQL
  • Storage: Cloud Storage Buckets
Asana Link
https://app.asana.com/0/1204439932895582/overview
Slack Channel Link
https://akvo.slack.com/archives/C04RMBFUR6F

Low Level Design

Introduction

About RUSH

The Kenya Rural Urban Sanitation and Hygiene (RUSH) platform is an advanced and comprehensive real-time monitoring and information system owned by the Ministry of Health in Kenya. This platform is designed to streamline and enhance the management of sanitation and hygiene data at both county and national levels.

One of the notable capabilities of the RUSH platform is its ability to handle large amounts of data efficiently. It supports Excel bulk upload, allowing users to upload data in bulk from Excel spreadsheets, which can significantly expedite the data entry process. Additionally, the platform features a web-form batch submission functionality, enabling users to submit multiple data entries through a user-friendly web-based interface.

To ensure data accuracy and reliability, the RUSH platform incorporates a data review and approval hierarchy between administrative levels. This means that data entered into the system undergoes a rigorous review process, where it is checked and approved by designated personnel at various administrative levels. This hierarchical approach ensures that data is thoroughly reviewed and validated before being utilised for analysis and decision-making.

Another significant aspect of the RUSH platform is its visualization capabilities. The platform follows the Joint Monitoring Programme (JMP) standard and the RUSH (Rural Urban Sanitation and Hygiene) standard when presenting data visually. By adhering to these standards, the platform ensures consistency and comparability in data visualization across different geographical areas and time periods. The visualizations generated by the platform help in understanding trends, patterns, and gaps in sanitation and hygiene metrics, providing valuable insights for policymakers, stakeholders, and researchers.

The Purpose of the RUSH Platform

The purpose of the Kenya Rural Urban Sanitation and Hygiene (RUSH) platform is to support effective monitoring, management, and improvement of sanitation and hygiene practices in Kenya. It serves as a comprehensive information system owned by the Ministry of Health, aiming to address the challenges and gaps in sanitation and hygiene by providing reliable data, analysis, and visualization tools.

  1. Data Collection and Aggregation: The RUSH platform serves as a centralised repository for collecting and aggregating both quantitative and qualitative data related to sanitation and hygiene practices. It allows for data collection at the county and national levels, ensuring comprehensive coverage and representation of diverse geographical areas.
  2. Real-Time Monitoring: The platform operates in real-time, enabling timely monitoring of sanitation and hygiene indicators. This real-time monitoring helps identify emerging trends, gaps, and challenges, allowing for prompt intervention and decision-making.
  3. Data Analysis and Insights: The RUSH platform facilitates data analysis, allowing policymakers and stakeholders to gain valuable insights into the state of sanitation and hygiene practices across different regions and demographics. By analysing the collected data, trends, patterns, and areas of improvement can be identified, contributing to evidence-based decision-making and targeted interventions.
  4. Reporting and Visualization: The platform enables the generation of reports and visualizations based on the collected data. The reports provide a comprehensive overview of the sanitation and hygiene situation, highlighting key indicators, challenges, and progress. The visualizations, following the JMP and RUSH standards, make complex data easily understandable, aiding in communication and knowledge dissemination.
  5. Decision Support: The RUSH platform acts as a decision support system, providing policymakers, health officials, and other stakeholders with the necessary information to formulate policies, design interventions, and allocate resources effectively. The data-driven insights and visualizations empower decision-makers to prioritize areas for improvement, target resources where they are most needed, and track progress over time.
  6. Collaboration and Accountability: The platform enhances collaboration between different administrative levels and stakeholders involved in sanitation and hygiene management. It establishes a data review and approval hierarchy, ensuring the accuracy and reliability of data. By promoting transparency and accountability, the platform facilitates coordinated efforts towards achieving national and international targets related to sanitation and hygiene.
  7. Continuous Improvement: The RUSH platform can be continually updated and enhanced to align with evolving needs and priorities. As new data sources, indicators, or best practices emerge, the platform can be adapted to incorporate these changes, ensuring that it remains a relevant and effective tool for monitoring and managing sanitation and hygiene in Kenya.

By leveraging technology and real-time data, the platform aims to contribute to better health outcomes, improved living conditions, and sustainable development in both rural and urban areas of the country.

Functional Overview

The Kenya Rural Urban Sanitation and Hygiene (RUSH) platform is a comprehensive real-time monitoring and information system owned by the Ministry of Health. It serves as a centralized platform for capturing, analysing, and visualizing sanitation and hygiene data at the national, county, sub-county, and ward levels. The platform provides various functionalities to facilitate data collection, analysis, reporting, and visualization, empowering decision-makers with timely and accurate information.

The RUSH platform promotes collaboration and accountability by fostering engagement between different administrative levels and stakeholders involved in sanitation and hygiene management. It acts as a decision support system, providing policymakers and health officials with the necessary information to formulate policies, design interventions, and allocate resources effectively. Additionally, the platform encourages continuous improvement by being adaptable to changing needs and priorities, accommodating new data sources, indicators, and best practices.

To ensure data accuracy and reliability, the RUSH platform incorporates a robust data review and approval hierarchy between administrative levels. This hierarchical approach guarantees that data is thoroughly reviewed, validated, and approved by designated personnel, enhancing the credibility and quality of the information within the system.

In summary, the RUSH platform's functional overview highlights its role as a comprehensive system for data collection, analysis, reporting, and visualization.

Data Collection and Management

Approval Hierarchy

User Roles and Access Control

Visualisations and Reports

Design Considerations

The design of the RUSH platform incorporates several key considerations to ensure its effectiveness in addressing the challenges and requirements of managing sanitation and hygiene practices in Kenya. Some of the design considerations of the RUSH platform include:

  1. Data Aggregation and Integration: The RUSH platform is designed to aggregate both quantitative and qualitative data from various sources and administrative levels. It integrates data from county and national levels, allowing for comprehensive and unified data management. This design consideration enables a holistic view of sanitation and hygiene practices across different geographical areas.
  2. Real-time Monitoring and Reporting: The platform emphasises real-time monitoring of sanitation and hygiene indicators. It provides timely updates on data collection, analysis, and reporting, enabling prompt interventions and decision-making. This design consideration ensures that stakeholders have access to the most up-to-date information to address emerging challenges effectively.
  3. User-Friendly Interface: The RUSH platform features a user-friendly interface that enhances usability and accessibility. It is designed with intuitive navigation, clear visual cues, and streamlined workflows. This consideration enables users of varying technical backgrounds to easily navigate the platform and perform tasks efficiently.
  4. Role-Based Access and Permissions: The platform employs role-based access control, assigning different levels of access and permissions based on user roles and administrative levels. This design consideration ensures data security, privacy, and appropriate data management by allowing users to access only the functionalities and data relevant to their roles and responsibilities.
  5. Data Validation and Approval Hierarchy: The RUSH platform incorporates a data validation process and approval hierarchy to ensure data accuracy and reliability. Appropriate users at different administrative levels review, validate, and approve the data, maintaining data integrity throughout the platform.
  6. Standardized Visualizations: The platform follows standardized visualization practices, including the Joint Monitoring Programme (JMP) standard and the RUSH standard. This design consideration ensures consistency and comparability in data visualizations, allowing for meaningful insights and effective communication of information across different regions and time periods.
  7. Scalability and Adaptability: The design of the RUSH platform takes into account its scalability and adaptability. It is built to accommodate a growing volume of data and changing requirements over time. This consideration ensures that the platform can evolve and meet the changing needs of sanitation and hygiene management in Kenya.
  8. Integration of Existing Systems: The design of the RUSH platform takes into consideration the integration of existing systems and data sources. It aims to leverage and integrate with other relevant platforms, databases, and information systems to facilitate data exchange, interoperability, and collaboration.

These design considerations are aimed at creating a robust, user-friendly, and scalable platform that effectively supports data management, analysis, reporting, and decision-making for improved sanitation and hygiene practices in Kenya.

Architecture

Class Diagrams

Class Functions

User Roles

The RUSH platform offers a range of user roles, each with its own set of capabilities and responsibilities. The Super Admin holds the highest level of administrative authority at the national level and oversees the overall operation of the platform. County Admins have the responsibility of managing the platform within their respective counties, while Data Approvers review and approve data at the sub-county level. Data Entry Staff are responsible for collecting data at the ward level, ensuring that information is captured accurately at the grassroots level. Additionally, Institutional Users have access to view and download data from all counties, facilitating research and analysis.

These user roles, aligned with administrative levels, contribute to the effective management of sanitation and hygiene data. By assigning specific roles and access privileges, the RUSH platform ensures that data is collected, validated, and utilised appropriately. This promotes accountability, collaboration, and evidence-based decision-making, leading to improved sanitation and hygiene practices throughout Kenya.

The following sections provide detailed descriptions of each user role, outlining their specific capabilities, page access, administration levels, and responsibilities. Understanding the functions and responsibilities of these user roles is vital to utilising the RUSH platform effectively and harnessing its full potential for transforming sanitation and hygiene practices in Kenya.

  1. Super Admin: The Super Admin holds the highest level of administrative authority in the RUSH platform at the national level. They have access to all functionalities and pages, including user management, data control, visualisation, questionnaires, approvals, and reports. As the overall national administrator, their responsibilities encompass assigning roles to County Admins, managing the organisation's settings, and overseeing the platform's operations. The Super Admin plays a crucial role in ensuring the smooth functioning and effective utilisation of the RUSH platform nationwide.
  2. County Admin: County Admins are responsible for overseeing the RUSH platform at the county level. They possess extensive access to functionalities and pages, including user management, data control, visualisation, questionnaires, approvals, and reports. Their primary role involves managing and coordinating the platform's operations within their respective counties. This includes assigning roles to Sub County RUSH Admins (Approvers) operating at the sub-county level, who play a crucial role in data management and approval. County Admins act as key facilitators in ensuring efficient and accurate data collection and analysis within their counties.
  3. Data Approver: Data Approvers hold the responsibility of giving final approval to the data submitted from their respective sub-counties. Operating at the sub-county administrative level, they possess access to functionalities and pages such as data control, visualisation, approvals, questionnaires, and reports. Data Approvers play a critical role in reviewing and validating data submitted by Data Entry Staff from their areas of jurisdiction. They have the authority to edit or return data for correction, ensuring data accuracy and reliability within their assigned sub-counties.
  4. Data Entry Staff: Data Entry Staff operate at the ward administrative level and are responsible for collecting data from the communities or villages assigned to them. They have access to functionalities and pages related to data entry, form submissions, data control, visualisation, and reports. Data Entry Staff play an essential role in gathering accurate and comprehensive data at the grassroots level, ensuring that the RUSH platform captures information directly from the targeted areas. Their diligent data collection efforts contribute to the overall effectiveness and reliability of the sanitation and hygiene data within the platform.
  5. Institutional User: Institutional Users have access to functionalities and pages such as profile management, visualisation, and reports. They can view and download data from all counties within the RUSH platform. Institutional Users do not possess administrative privileges but play a vital role in accessing and utilising the data for research, analysis, and decision-making purposes. Their ability to access data from multiple administrative levels ensures comprehensive insights and contributes to informed actions and interventions in the field of sanitation and hygiene.

Administrative Levels

The administrative levels within the RUSH platform are of utmost importance as they serve as a fundamental backbone for various components within the system. These administrative levels, provided by the Ministry of Health, play a crucial role in user management, data organisation, and the establishment of approval hierarchy rules. As such, this master list of administrative levels stands as a critical component that needs to be accurately provided by the Ministry of Health.

The administrative levels serve as a key reference for assigning roles and access privileges to users. Users are associated with specific administrative levels based on their responsibilities and jurisdiction. The administrative levels determine the data organisation structure, allowing for effective data aggregation, review, and approval processes. The approval hierarchy rules are established based on these administrative levels, ensuring proper authorisation and validation of submitted data. Additionally, this allows for effective data visualisation, filtering, and analysis based on administrative boundaries.

The administrative levels consist of distinct administrative names, level names, and unique identifiers, allowing for easy identification and filtering of data points within the platform.

  1. National: The National level represents the highest administrative level within the RUSH platform. It encompasses the entire country of Kenya and serves as the top-level jurisdiction for data management, coordination, and decision-making.
  2. County: The County level represents the second administrative level within the RUSH platform. It corresponds to the various counties in Kenya and acts as a primary jurisdiction for data collection, management, and implementation of sanitation and hygiene initiatives.
  3. Sub-County: The Sub-County level represents the third administrative level within the RUSH platform. It corresponds to the sub-county divisions within each county and serves as a localised jurisdiction for data collection, review, and approval processes.
  4. Ward: The Ward level represents the fourth administrative level within the RUSH platform. It corresponds to the wards or smaller subdivisions within each sub-county. Wards act as the grassroots level of data collection, ensuring that data is collected at the most localised and community-specific level.

Here's an explanation of the models and their relationships:

  1. Levels Model:

    • The Levels model represents the administrative levels within the RUSH platform.
    • Each instance of the Levels model corresponds to a specific administrative level, such as national, county, sub-county, or ward.
    • The model includes fields such as name and level.
    • The name field stores the name or label for the administrative level, as explained in the administrative levels above.
    • The level field stores the numerical representation of the administrative level, with lower values indicating higher levels of administration.
  2. Administration Model:

    • The Administration model represents administrative entities within the RUSH platform.
    • Each instance of the Administration model corresponds to a specific administrative entity, such as a county or sub-county.
    • The model includes fields such as parent, code, level, name, and path.
    • The parent field establishes a foreign key relationship with the Administration model itself, representing the parent administrative entity.
    • The code field stores a unique identifier or code for the administrative entity, which comes from the shapefile.
    • The level field establishes a foreign key relationship with the Levels model, indicating the administrative level associated with the entity.
    • The name field stores the name or label for the administrative entity.
    • The path field stores the hierarchical path or location of the administrative entity within the administrative structure.
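
To make these relationships concrete, below is a minimal sketch of how the two models might be declared in Django, based on the fields described above; option values such as max_length, the related names, and the exact path encoding are assumptions.

from django.db import models


class Levels(models.Model):
    # An administrative level, e.g. National, County, Sub-County, or Ward.
    name = models.CharField(max_length=50)
    level = models.IntegerField()  # lower values indicate higher levels


class Administration(models.Model):
    # The parent entity, e.g. the county that a sub-county belongs to
    # (null for the National level).
    parent = models.ForeignKey(
        "self", null=True, on_delete=models.CASCADE, related_name="children"
    )
    code = models.CharField(max_length=255, null=True)  # identifier from the shapefile
    level = models.ForeignKey(Levels, on_delete=models.CASCADE)
    name = models.TextField()
    # Hierarchical location within the administrative structure; the exact
    # encoding (e.g. a separator-joined list of ancestor ids) is an assumption.
    path = models.TextField(null=True)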

Functionality:

Forms

Forms play a vital role in the RUSH platform, serving as a fundamental component for collecting data related to sanitation and hygiene practices. They are designed to capture specific information necessary for monitoring and evaluating sanitation initiatives at various administrative levels.

Importance of Forms:

  1. Data Collection: Forms are designed to capture relevant data regarding sanitation and hygiene practices. They ensure that standardised information is collected consistently across different administrative levels.

  2. Information Management: Forms enable the organised storage and retrieval of data related to sanitation and hygiene practices. The collected data can be accessed, analysed, and visualised for informed decision-making and policy formulation.

  3. Monitoring and Evaluation: By collecting data through forms, the RUSH platform facilitates ongoing monitoring and evaluation of sanitation initiatives. This helps measure progress, identify challenges, and make data-driven decisions to improve sanitation and hygiene practices.

  4. Data Consistency and Standardisation: With questionnaire definitions and question attributes, forms ensure consistency and standardisation in data collection. This promotes reliable analysis and comparison of data across different regions and time periods.

  5. Approval Workflow: Forms incorporate approval rules and assignments, allowing designated administrators to review and approve data submitted through the platform. This ensures data quality and compliance with established standards.

  6. User Assignments: The platform assigns specific forms to individual users, enabling targeted data collection responsibilities. This streamlines the data collection process and ensures accountability.

  7. Integration with Other Components: Forms are integrated with other platform components such as question groups, question attributes, and options. This enhances the flexibility and customisation of data collection based on specific requirements.

Questions and Question Groups within Forms

Questions and question groups are essential components that contribute to the structured organisation and systematic data collection within forms. These components are interconnected and play a significant role in capturing information related to sanitation and hygiene practices.

  1. Forms Model

    • The Forms model represents individual forms within the RUSH platform.
    • Each form has a unique name, version, uuid, and type ("County" or "National").
    • The model establishes relationships with other models to facilitate data approval, question grouping, and user assignments.
    • Forms serve as the container for questions and question groups, defining the overall structure and context for data collection.
    • Each form is associated with specific questions and question groups that collectively capture data for a particular purpose, such as county-level or national-level sanitation assessments.
  2. Question Groups Model

    • The Question Group model represents a grouping mechanism for related questions within a form.
    • Question groups are an organisational unit within a form that groups together questions with a common theme or topic.
    • Each question group is associated with a specific form and has a unique name.
    • The order of question groups determines the sequence or presentation of these groups within the form.
  3. Questions Model

    • The Questions model represents individual questions within a form.
    • Questions are associated with a specific form and question group, defining their position and relationship within the form's structure.
    • Each question captures specific data points related to sanitation and hygiene practices.
    • Questions can have various types (e.g., administration (cascade), text, number, option, multiple option, geo, date) and properties (e.g., required, rule, dependency, and api for cascade-type questions).
    • The properties of questions are defined within the context of the question group and form they belong to.

Cascade-type questions have different API call properties for each user, depending on the user's administrative access, so users can only fill in forms within their own administrative area, as illustrated below.
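
For illustration, a cascade (administration) question could carry an api property along the lines of the sketch below; the endpoint path and placeholder name are hypothetical, not taken from the production form definitions.

# Hypothetical cascade question definition; the endpoint and the
# placeholder resolved per user are illustrative only.
cascade_question = {
    "name": "location",
    "type": "administration",  # cascade question type
    "required": True,
    "api": {
        # Resolved per user, so a ward-level data entry user only sees
        # administrative choices within their own ward.
        "endpoint": "/api/v1/administration/",
        "initial": "<user_administration_id>",
    },
}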

Form Data

The Form Data and Answers models work together to capture, store, and associate form data and the corresponding answers within the RUSH platform.

  1. Form Data Model

    • When a user fills out a form in the RUSH platform, the entered data is captured and stored as form data.
    • The Form Data model represents a specific data entry for a form within the platform.
    • Each instance of the Form Data model corresponds to a unique submission of a form by a user.
    • The Form Data model includes information such as the form name, version, administration level, geographical data, and timestamps for creation and updates.
    • By storing form data, the RUSH platform maintains a record of each user's submission and enables the tracking of changes and updates over time.
    • The form data is associated with the relevant form through a foreign key relationship, allowing easy retrieval and analysis of the submitted information.
  2. Answers Model

    • Within each form data entry, the user provides answers to the questions included in the form.
    • The Answers model represents individual answers for specific questions within a form data entry.
    • Each answer in the Answers model is associated with a particular question and the corresponding form data entry.
    • The model includes fields such as the answer value, name, options (if applicable), and timestamps for creation and updates.
    • By storing answers as separate instances, the RUSH platform retains the granularity of data, allowing for detailed analysis of each answer within the form data.
    • The answers are linked to the form data and questions through foreign key relationships, facilitating easy retrieval and analysis of specific answers within a given form data entry.
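
A minimal Django sketch of the two models, using the fields listed above, follows; field options and related names are assumptions, and Forms, Administration, and Questions refer to the models described earlier.

from django.conf import settings
from django.db import models


class FormData(models.Model):
    # One instance per form submission.
    name = models.TextField()
    form = models.ForeignKey("Forms", on_delete=models.CASCADE, related_name="form_data")
    administration = models.ForeignKey("Administration", on_delete=models.CASCADE)
    geo = models.JSONField(null=True)  # geographical point captured at submission
    created_by = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="+"
    )
    updated_by = models.ForeignKey(
        settings.AUTH_USER_MODEL, null=True, on_delete=models.CASCADE, related_name="+"
    )
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(null=True)


class Answers(models.Model):
    # One instance per answered question within a submission; the value is
    # stored in the column that matches the question type.
    data = models.ForeignKey(FormData, on_delete=models.CASCADE, related_name="answers")
    question = models.ForeignKey("Questions", on_delete=models.CASCADE)
    name = models.TextField(null=True)     # text and date answers
    value = models.FloatField(null=True)   # numeric answers
    options = models.JSONField(null=True)  # option / multiple-option answers
    created_by = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="+"
    )
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(null=True)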

Functionality:

Class Overview

Class Name
Class Notes
Organisation Organisation(id, name)
OrganisationAttribute OrganisationAttribute(id, organisation, type)
SystemUser SystemUser(id, password, last_login, is_superuser, email, date_joined, first_name, last_name, phone_number, designation, trained, updated, deleted_at, organisation)
Levels Levels(id, name, level)
Administration Administration(id, parent, code, level, name, path)
Access Access(id, user, administration, role)
Forms Forms(id, name, version, uuid, type)
FormApprovalRule FormApprovalRule(id, form, administration)
FormApprovalAssignment FormApprovalAssignment(id, form, administration, user, updated)
QuestionGroup QuestionGroup(id, form, name, order)
Questions Questions(id, form, question_group, order, text, name, type, meta, required, rule, dependency, api, extra)
QuestionOptions QuestionOptions(id, question, order, code, name, other)
UserForms UserForms(id, user, form)
QuestionAttribute QuestionAttribute(id, name, question, attribute, options)
ViewJMPCriteria ViewJMPCriteria(id, form, name, criteria, level, score)
FormData FormData(id, name, form, administration, geo, created_by, updated_by, created, updated)
PendingDataBatch PendingDataBatch(id, form, administration, user, name, uuid, file, approved, created, updated)
PendingDataBatchComments PendingDataBatchComments(id, batch, user, comment, created)
PendingFormData PendingFormData(id, name, form, data, administration, geo, batch, created_by, updated_by, created, updated)
PendingDataApproval PendingDataApproval(id, batch, user, level, status)
PendingAnswers PendingAnswers(id, pending_data, question, name, value, options, created_by, created, updated)
PendingAnswerHistory PendingAnswerHistory(id, pending_data, question, name, value, options, created_by, created, updated)
Answers Answers(id, data, question, name, value, options, created_by, created, updated)
AnswerHistory AnswerHistory(id, data, question, name, value, options, created_by, created, updated)
ViewPendingDataApproval ViewPendingDataApproval(id, status, user, level, batch, pending_level)
ViewDataOptions ViewDataOptions(id, data, administration, form, options)
ViewOptions ViewOptions(id, data, administration, question, answer, form, options)
ViewJMPData ViewJMPData(id, data, path, form, name, level, matches, score)
ViewJMPCount ViewJMPCount(id, path, form, name, level, total)
Jobs Jobs(id, task_id, type, status, attempt, result, info, user, created, available)
DataCategory DataCategory(id, name, data, form, options)
Task Task(id, name, func, hook, args, kwargs, result, group, started, stopped, success, attempt_count)
Success Success(id, name, func, hook, args, kwargs, result, group, started, stopped, success, attempt_count)
Failure Failure(id, name, func, hook, args, kwargs, result, group, started, stopped, success, attempt_count)
Schedule Schedule(id, name, func, hook, args, kwargs, schedule_type, minutes, repeats, next_run, cron, task, cluster)
OrmQ OrmQ(id, key, payload, lock)

Database Overview

Main Tables

Each table below lists its columns in order (pos), together with nullability (null), data type (dtype), maximum length (len), and column default (default).

access

pos table column null dtype len default
1 access id NO bigint   access_id_seq
2 access role NO int    
3 access administration_id NO bigint    
4 access user_id NO bigint    

administrator

pos table column null dtype len default
1 administrator id NO bigint   administrator_id_seq
2 administrator code YES character varying 255  
3 administrator name NO text    
4 administrator level_id NO bigint    
5 administrator parent_id YES bigint    
6 administrator path YES text    

answer

pos table column null dtype len default
1 answer id NO bigint   answer_id_seq
2 answer name YES text    
3 answer value YES double    
4 answer options YES jsonb    
5 answer created NO tz timestamp    
6 answer updated YES tz timestamp    
7 answer created_by_id NO bigint    
8 answer data_id NO bigint    
9 answer question_id NO bigint    

answer_history

pos table column null dtype len default
1 answer_history id NO bigint   answer_history_id_seq
2 answer_history name YES text    
3 answer_history value YES double    
4 answer_history options YES jsonb    
5 answer_history created NO tz timestamp    
6 answer_history updated YES tz timestamp    
7 answer_history created_by_id NO bigint    
8 answer_history data_id NO bigint    
9 answer_history question_id NO bigint    

batch

pos table column null dtype len default
1 batch id NO bigint   batch_id_seq
2 batch name NO text    
3 batch uuid YES uuid    
4 batch file YES character varying 200  
5 batch created NO tz timestamp    
6 batch updated YES tz timestamp    
7 batch administration_id NO bigint    
8 batch form_id NO bigint    
9 batch user_id NO bigint    
10 batch approved NO bool    

batch_comment

pos table column null dtype len default
1 batch_comment id NO bigint   batch_comment_id_seq
2 batch_comment comment NO text    
3 batch_comment created NO tz timestamp    
4 batch_comment batch_id NO bigint    
5 batch_comment user_id NO bigint    

data

pos table column null dtype len default
1 data id NO bigint   data_id_seq
2 data name NO text    
3 data geo YES jsonb    
4 data created NO tz timestamp    
5 data updated YES tz timestamp    
6 data administration_id NO bigint    
7 data created_by_id NO bigint    
8 data form_id NO bigint    
9 data updated_by_id YES bigint    

form

pos table column null dtype len default
1 form id NO bigint   form_id_seq
2 form name NO text    
3 form version NO int    
4 form uuid NO uuid    
5 form type YES int    

form_approval_assignment

pos table column null dtype len default
1 form_approval_assignment id NO bigint   form_approval_assignment_id_seq
2 form_approval_assignment updated YES tz timestamp    
3 form_approval_assignment administration_id NO bigint    
4 form_approval_assignment form_id NO bigint    
5 form_approval_assignment user_id NO bigint    

form_approval_rule

pos table column null dtype len default
1 form_approval_rule id NO bigint   form_approval_rule_id_seq
2 form_approval_rule administration_id NO bigint    
3 form_approval_rule form_id NO bigint    

jobs

pos table column null dtype len default
1 jobs id NO bigint   jobs_id_seq
2 jobs type NO int    
3 jobs status NO int    
4 jobs attempt NO int    
5 jobs result YES text    
6 jobs info YES jsonb    
7 jobs created NO tz timestamp    
8 jobs available YES tz timestamp    
9 jobs user_id NO bigint    
10 jobs task_id YES character varying 50  

levels

pos table column null dtype len default
1 levels id NO bigint   levels_id_seq
2 levels name NO character varying 50  
3 levels level NO int    

option

pos table column null dtype len default
1 option id NO bigint   option_id_seq
2 option order YES bigint    
3 option code YES character varying 255  
4 option name NO text    
5 option other NO bool    
6 option question_id NO bigint    

organisation

pos table column null dtype len default
1 organisation id NO bigint   organisation_id_seq
2 organisation name NO character varying 255  

organisation_attribute

pos table column null dtype len default
1 organisation_attribute id NO bigint   organisation_attribute_id_seq
2 organisation_attribute type NO int    
3 organisation_attribute organisation_id NO bigint    

pending_answer

pos table column null dtype len default
1 pending_answer id NO bigint   pending_answer_id_seq
2 pending_answer name YES text    
3 pending_answer value YES double    
4 pending_answer options YES jsonb    
5 pending_answer created NO tz timestamp    
6 pending_answer updated YES tz timestamp    
7 pending_answer created_by_id NO bigint    
8 pending_answer pending_data_id NO bigint    
9 pending_answer question_id NO bigint    

pending_answer_history

pos table column null dtype len default
1 pending_answer_history id NO bigint   pending_answer_history_id_seq
2 pending_answer_history name YES text    
3 pending_answer_history value YES double    
4 pending_answer_history options YES jsonb    
5 pending_answer_history created NO tz timestamp    
6 pending_answer_history updated YES tz timestamp    
7 pending_answer_history created_by_id NO bigint    
8 pending_answer_history pending_data_id NO bigint    
9 pending_answer_history question_id NO bigint    

pending_data

pos table column null dtype len default
1 pending_data id NO bigint   pending_data_id_seq
2 pending_data name NO text    
3 pending_data geo YES jsonb    
5 pending_data created NO tz timestamp    
6 pending_data administration_id NO bigint    
7 pending_data created_by_id NO bigint    
8 pending_data data_id YES bigint    
9 pending_data form_id NO bigint    
11 pending_data batch_id YES bigint    
12 pending_data updated YES tz timestamp    
13 pending_data updated_by_id YES bigint    

pending_data_approval

pos table column null dtype len default
1 pending_data_approval id NO bigint   pending_data_approval_id_seq
2 pending_data_approval status NO int    
4 pending_data_approval user_id NO bigint    
5 pending_data_approval level_id NO bigint    
6 pending_data_approval batch_id NO bigint    

question

pos table column null dtype len default
1 question id NO bigint   question_id_seq
2 question order YES bigint    
3 question text NO text    
4 question name NO character varying 255  
5 question type NO int    
6 question meta NO bool    
7 question required NO bool    
8 question rule YES jsonb    
9 question dependency YES jsonb    
10 question form_id NO bigint    
11 question question_group_id NO bigint    
12 question api YES jsonb    
13 question extra YES jsonb    

question_attribute

pos table column null dtype len default
1 question_attribute id NO bigint   question_attribute_id_seq
2 question_attribute name YES text    
3 question_attribute attribute NO int    
4 question_attribute options YES jsonb    
5 question_attribute question_id NO bigint    

question_group

pos table column null dtype len default
1 question_group id NO bigint   question_group_id_seq
2 question_group name NO text    
3 question_group form_id NO bigint    
4 question_group order YES bigint    

system_user

pos table column null dtype len default
1 system_user id NO bigint   system_user_id_seq
2 system_user password NO character varying 128  
3 system_user last_login YES tz timestamp    
4 system_user is_superuser NO bool    
5 system_user email NO character varying 254  
6 system_user date_joined NO tz timestamp    
7 system_user first_name NO character varying 50  
8 system_user last_name NO character varying 50  
9 system_user designation YES character varying 50  
10 system_user phone_number YES character varying 15  
11 system_user updated YES tz timestamp    
12 system_user deleted_at YES tz timestamp    
13 system_user organisation_id YES bigint    
14 system_user trained NO bool    

user_form

pos table column null dtype len default
1 user_form id NO bigint   user_form_id_seq
2 user_form form_id NO bigint    
3 user_form user_id NO bigint    

Materialized Views

Relationship Diagrams

rtmis-main.png

To generate the relationship diagram for the RUSH platform, the dbdocs.io tool is utilized. The process involves using the django-dbml library to generate a dbml (database markup language) file that represents the database schema and entity relationships based on the Django models.

This dbml file is then pushed to a designated location, accessible during the CI/CD pipeline. The dbdocs.io command-line tool is utilized to build the documentation using the dbml file. The process typically includes specifying the location of the dbml file and providing a project name, which may be customized based on the CI/CD environment or branch. Once the documentation is built, the resulting relationship diagram can be accessed via the generated dbdocs.io link, which provides a visual representation of the database schema and the relationships between entities within the RUSH platform.

# Generate DBML
# https://github.com/akvo/rtmis/blob/main/backend/run-qc.sh#L22
python manage.py dbml > db.dbml

# Push DBDocs
# https://github.com/akvo/rtmis/blob/main/ci/build.sh#L116-L122
update_dbdocs() {
    if [[ "${CI_BRANCH}" == "main" || "${CI_BRANCH}" == "develop" ]]; then
        npm install -g dbdocs
        # dbdocs build doc/dbml/schema.dbml --project rtmis
        dbdocs build backend/db.dbml --project "rtmis-$CI_BRANCH"
    fi
}

To view the comprehensive relationship diagram for the RUSH platform, please refer to the following link: RUSH Platform Relationship Diagram.

Sequence Diagrams

Data Flow Diagrams

rtmis-data-flow.png

User Interface Design

The RUSH platform incorporates a range of user interfaces designed to enhance usability, streamline workflows, and enable efficient data management and analysis. These interfaces serve as the gateway for users to interact with the platform's various features and functionalities. From the login page that grants access to authenticated users, to the dashboard providing an informative overview of key data and notifications, each interface has a specific purpose and contributes to the seamless operation of the platform.

These user interfaces collectively offer a comprehensive and intuitive user experience, facilitating efficient data entry, analysis, visualization, approval workflows, and decision-making within the RUSH platform.

For a detailed visual representation of the user interfaces within the RUSH platform, please refer to the design interface available at the following link: RUSH Platform Design Interface.

This interface showcases the overall layout, design elements, and interactions that users can expect when navigating through the platform. It provides a valuable reference for understanding the visual aesthetics, information architecture, and user flow incorporated into the RUSH platform's user interfaces. By exploring the design interface, stakeholders can gain a clearer understanding of the platform's look and feel, facilitating better collaboration and alignment throughout the development process.

Error Handling

Error Handling Rules

The platform incorporates robust error handling strategies to address various types of errors that may occur during operation. Here are the key considerations for error handling in the RUSH platform:

  1. Error Logging and Monitoring: The platform logs errors and exceptions that occur during runtime. These logs capture relevant details such as the error type, timestamp, user context, and relevant system information. Error logs enable developers and administrators to identify and troubleshoot issues efficiently, helping to improve system reliability and performance.
  2. User-Friendly Error Messages: When errors occur, the platform provides user-friendly error messages that communicate the issue clearly and concisely. Clear error messages help users understand the problem and take appropriate actions or seek assistance. The messages may include relevant details about the error, potential solutions, and contact information for support if necessary.
  3. Graceful Degradation and Recovery: The platform is designed to handle errors gracefully, minimising disruptions and providing fallback mechanisms where possible. For example, if a specific functionality or service becomes temporarily unavailable, the platform can display a fallback message or provide alternative options to ensure users can continue their work or access relevant information.
  4. Error Validation and Input Sanitisation: The platform applies comprehensive input validation and sanitisation techniques to prevent and handle errors caused by invalid or malicious user input. This includes validating user-submitted data, sanitising inputs to prevent code injection or script attacks, and ensuring that data conforms to expected formats and ranges. Proper input validation reduces the risk of errors and security vulnerabilities.
  5. Exception Handling and Error Recovery: The platform utilises exception handling mechanisms to catch and handle errors gracefully. Exceptions are caught, logged, and processed to prevent system crashes or unexpected behavior. The platform incorporates appropriate error recovery strategies, such as rolling back transactions or reverting to previous states, to maintain data integrity and prevent data loss or corruption.
  6. Error Reporting and Support Channels: The platform provides channels for users to report errors and seek support. These channels can include contact forms, dedicated support email addresses, or a help-desk system. By offering reliable channels for error reporting and support, users can report issues promptly and receive assistance in resolving them effectively.
  7. Continuous Improvement: The platform regularly assesses error patterns and user feedback to identify recurring issues and areas for improvement. By analysing error trends, the development team can prioritise bug fixes, optimise system components, and enhance the overall stability and reliability of the platform.
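
As one concrete illustration of points 1 and 2, API errors could be logged and rewrapped with a user-friendly message through Django REST Framework's custom exception handler hook; the module path, message wording, and log format below are assumptions, not the platform's actual handler.

import logging

from rest_framework.views import exception_handler

logger = logging.getLogger(__name__)


def rush_exception_handler(exc, context):
    # Let DRF build its standard error response first (None for unhandled errors).
    response = exception_handler(exc, context)
    # Log the error type, the view, and the user context for troubleshooting.
    logger.error(
        "error=%s view=%s user=%s",
        type(exc).__name__,
        context["view"].__class__.__name__,
        getattr(context["request"], "user", None),
    )
    if response is not None:
        # Keep the technical detail, but lead with a clear, concise message.
        response.data = {
            "message": "Something went wrong. Please try again or contact support.",
            "detail": response.data,
        }
    return response


# settings.py (module path is an assumption):
# REST_FRAMEWORK = {"EXCEPTION_HANDLER": "utils.errors.rush_exception_handler"}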

List Errors

The following section provides an overview of potential errors that may occur within the RUSH platform. While data validation plays a significant role in minimizing errors during data entry and form submissions, certain issues can still arise in other aspects of the platform's functionality. These errors encompass various areas, including authentication, authorization, file uploads, data synchronization, network connectivity, server timeouts, data import/export, data corruption, missing data, report generation, visualization, server overload, email notifications, and third-party integrations. By being aware of these potential errors, the development team can proactively address and implement proper error handling mechanisms to ensure smooth operations, enhance user experience, and maintain data integrity throughout the platform.

  1. Database Connection Error: Failure to establish a connection with the database server, resulting in the inability to retrieve or store data.

  2. Authentication Error: Users may encounter authentication errors when attempting to log in, indicating invalid credentials or authentication failures.

  3. Authorisation Error: Users may encounter authorisation errors when accessing certain features or performing actions for which they do not have sufficient privileges.

  4. File Upload Error: When uploading files, errors may occur due to file format compatibility, size limitations, or network connectivity issues.

  5. Data Synchronisation Error: In a multi-user environment, conflicts may arise when multiple users attempt to update the same data simultaneously, leading to synchronisation errors.

  6. Network Connectivity Error: Users may experience network connectivity issues, preventing them from accessing the platform or transmitting data.

  7. Server Timeout Error: When processing resource-intensive tasks, such as generating complex reports or visualizations, server timeouts may occur if the process exceeds the maximum allowed execution time.

  8. Data Import/Export Error: Errors may occur during the import or export of data, resulting in data loss, formatting issues, or mismatches between source and destination formats.

  9. Data Corruption Error: In rare cases, data corruption may occur, leading to inconsistencies or incorrect values in the database.

  10. Missing Data Error: Users may encounter missing data issues when attempting to retrieve or access specific records or fields that have not been properly captured or stored.

  11. Report Generation Error: Errors may occur during the generation of reports, resulting in incomplete or inaccurate data representation or formatting issues.

  12. Visualization Error: Issues with data visualization components, such as charts or graphs, may lead to incorrect data representation or inconsistencies in visual outputs.

  13. Server Overload Error: During periods of high user activity or resource-intensive tasks, the server may become overloaded, causing slowdowns or system instability.

  14. Email Notification Error: Failure to send email notifications, such as approval requests or password reset emails, may occur due to issues with the email service or configuration.

  15. Third-Party Integration Error: Errors may arise when integrating with external services or APIs, resulting in data transfer issues or functionality limitations.

These errors represent potential issues that may arise in the RUSH platform, excluding errors already addressed by data validation measures. It's crucial to implement proper error handling and logging mechanisms to promptly identify, track, and resolve these errors, ensuring the smooth functioning of the platform.

Security Considerations

The RUSH platform incorporates multiple security measures to safeguard data, protect user privacy, and ensure secure operations across its Docker containers and cloud-based infrastructure. Here are the key security considerations in the platform:

  1. Container Security (Docker): The Docker containers, including the Back-end and Worker containers, are designed with security in mind. The containers are configured to follow best practices such as using official base images, regularly updating dependencies, and employing secure container runtime configurations. These measures reduce the risk of vulnerabilities and unauthorised access within the containerised environment.
  2. Access Control and Authentication: The platform implements robust access control mechanisms to ensure that only authorised users can access the system and its functionalities. User authentication, such as through the use of JWT (JSON Web Token), is employed to verify user identities and grant appropriate access based on roles and permissions. This helps prevent unauthorised access to sensitive data and functionalities.
  3. Network Security (NGINX): The Front-end container, powered by NGINX, helps enforce security measures at the network level. NGINX can be configured to handle SSL/TLS encryption, protecting data in transit between users and the platform. It can also serve as a reverse proxy, effectively managing incoming traffic and providing an additional layer of security to prevent potential attacks.
  4. Secure Database Storage (Cloud-SQL): The RUSH platform utilises Cloud-SQL for secure database storage. Cloud-SQL offers built-in security features, including encryption at rest and transit, role-based access control, and regular security updates. These measures help protect the integrity and confidentiality of the platform's data stored in the Cloud-SQL database.
  5. Secure File Storage (Cloud Storage Bucket): The platform leverages Cloud Storage Bucket for secure file storage. Cloud Storage provides robust access controls, including fine-grained permissions, encryption, and auditing capabilities. This ensures that data files, such as uploaded documents, are securely stored and protected from unauthorised access. File endpoints should be served only by the back-end, so that authentication is also applied to file access.
  6. Security Monitoring and Auditing: The platform implements security monitoring and auditing tools to detect and respond to potential incidents. System logs and activity records are regularly reviewed to identify any suspicious activities or breaches. Additionally, periodic security audits are conducted to assess and address potential vulnerabilities proactively.
  7. User Education and Awareness: The platform emphasises user education and awareness regarding security best practices. Users are encouraged to follow a strong password policy: at least one lowercase letter, one uppercase letter, one number, and one special character, with no whitespace and a minimum of 8 characters (expressed as a regular expression in the sketch below). By promoting user security awareness, the platform strengthens its overall security posture.
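
For illustration, the stated password policy can be expressed as a single regular expression; the platform's actual validation logic may differ.

import re

# Each lookahead enforces one rule of the stated policy.
PASSWORD_RE = re.compile(
    r"^(?=.*[a-z])"   # at least one lowercase letter
    r"(?=.*[A-Z])"    # at least one uppercase letter
    r"(?=.*\d)"       # at least one number
    r"(?=.*[^\w\s])"  # at least one special character
    r"(?!.*\s)"       # no whitespace
    r".{8,}$"         # minimum 8 characters
)

assert PASSWORD_RE.match("Str0ng!Pass")
assert not PASSWORD_RE.match("weak password")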

Performance Considerations

The RUSH platform has several performance considerations, particularly in relation to visualisation, Excel data download, data upload, and validation. While these functionalities are crucial for effective data management and analysis, they can pose potential performance challenges due to the volume and complexity of the data involved. The platform takes these considerations into account to optimise performance and ensure a smooth user experience. Here are the key performance considerations:

  1. Visualisation: Visualisations are powerful tools for data analysis and communication. However, generating complex visualisations from large datasets can be computationally intensive and may lead to performance issues. The RUSH platform employs optimisation techniques, such as efficient data retrieval, caching, and rendering algorithms, to enhance the speed and responsiveness of visualisations. It strives to strike a balance between visual richness and performance to provide users with meaningful insights without sacrificing usability.
  2. Excel Data Download: The ability to download data in Excel format is essential for users to perform in-depth analysis and reporting. However, large datasets or complex queries can result in long download times and increased server load. To mitigate this, the RUSH platform optimises the data retrieval and export process, employing techniques such as data compression and efficient file generation. It aims to minimise download times and ensure a seamless user experience when exporting data to Excel.
  3. Data Upload and Validation: Data upload and validation involve processing and verifying large volumes of data. This process can be time-consuming, particularly when dealing with extensive datasets or complex validation rules. The RUSH platform optimises data upload and validation processes through efficient algorithms and parallel processing techniques. It strives to expedite the data entry process while maintaining data integrity and accuracy.

To ensure optimal performance, the RUSH platform continuously monitors system performance, identifies bottlenecks, and implements performance optimisations as needed. This may involve infrastructure scaling, database optimisations, query optimisations, and caching strategies. Regular maintenance and updates are conducted to keep the platform running smoothly and efficiently.
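
As an example of one such strategy, an expensive visualisation aggregation could be served through Django's cache framework; the key name, timeout, and helper function below are hypothetical.

from django.core.cache import cache


def get_visualisation_data(form_id):
    # Serve repeated visualisation requests from cache rather than
    # re-running the aggregation query on every page load.
    key = f"visualisation-{form_id}"
    result = cache.get(key)
    if result is None:
        result = run_expensive_aggregation(form_id)  # hypothetical helper
        cache.set(key, result, timeout=60 * 60)  # recompute at most hourly
    return result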

It is worth noting that the platform's performance can also be influenced by factors such as network connectivity, hardware capabilities, and user behavior. To mitigate these factors, the RUSH platform provides guidelines and best practices for users to optimise their own data handling processes and network connectivity.

Deployment Strategy

The RUSH platform follows a deployment strategy that leverages the capabilities of the Google Cloud Platform (GCP) to ensure efficient and reliable deployment of the application. The deployment strategy includes the use of Google Kubernetes Engine (GKE) to manage containers, the storage of container images in the Container Registry with git hash suffixes, the utilisation of ingress and load balancers for routing traffic, Cloud DNS for domain management, and IAM key management services for secure access to CloudSQL using gcloud proxy. Here's an explanation of each component of the deployment strategy:

  1. Google Kubernetes Engine (GKE):

    • GKE is utilised as the container orchestration platform for deploying and managing the RUSH platform's containers.
    • The application is deployed in two clusters: the test cluster and the production cluster.
    • The test cluster receives updates from the main branch, allowing for continuous integration and testing of new features and code changes.
    • The production cluster receives tagged releases, ensuring stability and reliability for the live environment.
  2. Container Registry:

    • Container images of the RUSH platform are stored in the Google Container Registry.
    • Each container image is suffixed with a git hash, providing a unique identifier for version control and traceability.
    • This approach allows for efficient image management, rollbacks, and reproducible deployments.
  3. Ingress, Load Balancers, and Cloud DNS:

    • Ingress and load balancers are utilised to route and distribute traffic to the RUSH platform's services within the GKE clusters.
    • Ingress acts as the entry point, directing requests to the appropriate services based on defined rules.
    • Load balancers ensure high availability and scalability by distributing traffic across multiple instances of the platform.
    • Cloud DNS is used for domain management, mapping domain names to the respective IP addresses of the deployed services.
  4. CloudSQL and IAM Key Management Services:

    • The RUSH platform accesses CloudSQL, the managed relational database service on GCP, for data storage and retrieval.
    • IAM key management services are utilised to securely connect to CloudSQL using the gcloud proxy.
    • This approach ensures secure and controlled access to the database, limiting exposure of sensitive information.

rtmis-deployment.png

By utilising GCP services such as GKE, Container Registry, ingress, load balancers, Cloud DNS, CloudSQL, and IAM key management services, the RUSH platform benefits from a robust and scalable deployment strategy. It enables efficient management of containers, version control of images, routing and distribution of traffic, secure access to the database, and reliable domain management. This deployment strategy ensures a stable and performant environment for running the RUSH platform, facilitating seamless user access and interaction.

To view the example deployment script for the RUSH platform, please refer to the following link: RUSH Platform CI/CD.

Testing Strategy

Testing Framework and Tools

The RUSH platform employs a comprehensive testing strategy to ensure the reliability, functionality, and quality of both its back-end and front-end components. The testing strategy encompasses different levels of testing, including back-end testing with Django Test, front-end testing with Jest, and container network testing with HTTP (bash). Here is an overview of the testing strategy for the RUSH platform:

Back-end Testing with Django Test

Front-end Testing with Jest

Container Network Testing with HTTP (bash), which will be replaced by SeleniumHQ:

The testing strategy for the RUSH platform aims to achieve thorough coverage across the back-end, front-end, and container network aspects. It focuses on validating the functionality, data flow, interactions, and network connectivity within the platform. Test cases are designed to cover a wide range of scenarios, including normal operation, edge cases, and potential error conditions.

Hardware Capability Evaluation

In addition to the testing strategies mentioned earlier, the RUSH platform recognises the importance of stress testing to evaluate hardware capability and performance under heavy workloads. This applies specifically to resource-intensive tasks such as data validation and data seeding from the Excel bulk data upload feature. Stress testing is conducted to simulate high-demand scenarios and identify potential bottlenecks or performance issues. Here's an explanation of the stress testing approach:

Stress Testing

  • Stress testing involves subjecting the RUSH platform to simulated high-volume and high-concurrency scenarios to evaluate its performance and robustness under heavy workloads.
  • During stress testing, the platform is tested with large datasets or concurrent user loads that closely represent real-world usage scenarios.
  • The focus is on measuring the response time, throughput, and resource utilisation to identify any performance degradation, scalability issues, or resource limitations.

Data Validation Stress Test

  • A stress test specifically targeting the data validation process is conducted to assess how the platform performs when validating large volumes of data from the Excel bulk data upload feature.
  • The stress test involves simulating multiple concurrent data uploads, each containing a significant amount of data that requires validation.
  • The test measures the time taken to process and validate the data, ensuring that the platform maintains acceptable performance levels and does not become overwhelmed by the workload.
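
A minimal sketch of such a concurrent-upload stress test is shown below; the endpoint URL and file names are placeholders, not the platform's actual routes:

import time
from concurrent.futures import ThreadPoolExecutor

import requests

UPLOAD_URL = "https://rtmis.example.org/api/v1/upload/excel"  # placeholder endpoint
FILES = [f"bulk_data_{i}.xlsx" for i in range(20)]  # pre-generated large workbooks


def upload(path):
    """POST one workbook and return its status code and elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        response = requests.post(UPLOAD_URL, files={"file": f}, timeout=600)
    return path, response.status_code, time.perf_counter() - start


# Fire all uploads concurrently and report per-file latency
with ThreadPoolExecutor(max_workers=20) as pool:
    for path, status, elapsed in pool.map(upload, FILES):
        print(f"{path}: HTTP {status} in {elapsed:.1f}s")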

Data Seeding Stress Test

  • A stress test focusing on the data seeding process is conducted to evaluate the platform's capability to handle heavy data seeding operations resulting from the Excel bulk data upload feature.
  • The stress test involves simulating a high number of concurrent data seeding requests, each involving a large dataset to be inserted into the database.
  • The test measures the time taken to seed the data, ensuring that the platform can handle the load without compromising performance or causing data integrity issues.

The stress testing process aims to identify any performance bottlenecks, resource limitations, or scalability issues that may arise when the platform is subjected to heavy workloads. By conducting stress tests, the development team can gather valuable insights and make necessary optimisations to ensure that the platform can handle the expected load and perform optimally under stressful conditions.

The stress testing phase is important to validate the hardware capability and scalability of the RUSH platform, particularly during resource-intensive tasks like data validation and data seeding from the Excel bulk data upload feature.

Assumptions and Constraints

The development and operation of the RUSH platform are subject to certain assumptions and constraints that influence its design and functionality. These assumptions and constraints are important to consider as they provide context and boundaries for the platform's implementation. Here are the key assumptions and constraints of the RUSH platform:

  1. Technical Infrastructure: The RUSH platform assumes access to a reliable technical infrastructure, including servers, networking components, and cloud-based services. It requires sufficient computational resources, storage capacity, and network connectivity to handle the expected user load and data processing requirements.
  2. Data Availability and Quality: The platform assumes the availability and quality of data from various sources, including county and national levels. It relies on the assumption that relevant data is collected, validated, and provided by the respective stakeholders. The accuracy, completeness, and timeliness of the data are crucial for effective analysis and decision-making within the platform.
  3. Compliance with Regulatory Requirements: The RUSH platform operates under the assumption that it complies with applicable laws, regulations, and data privacy requirements. It is assumed that necessary consent, data usage, and privacy policies are in place to protect user data and comply with legal obligations.
  4. User Adoption and Engagement: The platform assumes user adoption and engagement, as its success relies on active participation and utilisation by relevant stakeholders. It assumes that users, including data entry staff, data approvers, administrators, and institutional users, will actively use the platform, contribute accurate data, and engage in data analysis and decision-making processes.
  5. System Scalability and Performance: The RUSH platform assumes that it can scale and perform adequately to handle increasing user demand and growing data volumes over time. It assumes that the necessary infrastructure and optimisations can be implemented to maintain system performance, responsiveness, and reliability as the user base and data size expand.
  6. Collaboration and Data Sharing: The platform assumes a collaborative environment and willingness among stakeholders to share data and insights. It assumes that relevant agencies, organisations, and institutions are willing to collaborate, contribute data, and use the platform's functionalities for informed decision-making and improved sanitation and hygiene practices.
  7. Resource Constraints: The development and maintenance of the RUSH platform operate within resource constraints, such as budgetary limitations, time constraints, and availability of skilled personnel. These constraints may impact the scope, timeline, and features of the platform's implementation and ongoing operations.

Dependencies

Software Dependencies

The RUSH platform incorporates various dependencies and frameworks to enable its functionality and deliver a seamless user experience. The following dependencies are essential components used in the development of the platform:

  1. Django: The RUSH platform utilises Django, a high-level Python web framework, to build the back-end infrastructure. Django provides a solid foundation for handling data management, authentication, and implementing business logic.
  2. Pandas: The platform relies on Pandas, a powerful data manipulation and analysis library in Python, to handle data processing tasks efficiently. Pandas enables tasks such as data filtering, transformation, and aggregation, enhancing the platform's data management capabilities.
  3. React: The front-end of the RUSH platform is developed using React, a popular JavaScript library for building user interfaces. React enables the creation of dynamic and interactive UI components, ensuring a responsive and engaging user experience.
  4. Ant Design (antd): The platform utilises Ant Design, a comprehensive UI library based on React, to design and implement a consistent and visually appealing user interface. Ant Design provides a rich set of customisable and reusable UI components, streamlining the development process.
  5. Echarts: Echarts, a powerful charting library, is integrated into the RUSH platform to generate various data visualisations. With Echarts, the platform can display charts, graphs, and other visual representations of data, enabling users to gain insights and make informed decisions.
  6. D3: The RUSH platform incorporates D3.js, a JavaScript library for data visualisation. D3.js provides a powerful set of tools for creating interactive and customisable data visualisations, including charts, graphs, and other visual representations. By leveraging D3.js, the platform can deliver dynamic and engaging data visualisations to users.

  7. Leaflet: The platform incorporates Leaflet, a JavaScript library for interactive maps, to handle geo-spatial data visualisation. Leaflet enables the integration of maps, markers, and other geo-spatial features, enhancing the platform's ability to represent and analyse location-based information.
  8. Node-sass: Node-sass is a Node.js library that enables the compilation of Sass (Syntactically Awesome Style Sheets) files into CSS. The RUSH platform uses node-sass to process and compile Sass files, allowing for a more efficient and maintainable approach to styling the user interface.

In addition to the previously mentioned dependencies, the RUSH platform relies on the following essential dependencies and libraries to support its functionality and development process:

  1. Django Rest Framework (DRF): The RUSH platform utilises Django Rest Framework, a powerful and flexible toolkit for building Web APIs. DRF simplifies the development of APIs within the platform, providing features such as request/response handling, authentication, serialisation, and validation. It enables seamless integration of RESTful API endpoints, allowing for efficient communication between the frontend and backend components.
  2. PyJWT: PyJWT is a Python library that enables the implementation of JSON Web Tokens (JWT) for secure user authentication and authorisation. The RUSH platform utilises PyJWT to generate, validate, and manage JWT tokens. JWT tokens play a crucial role in ensuring secure user sessions, granting authorised access to specific functionalities and data within the platform.
  3. Sphinx: Sphinx is a documentation generation tool widely used in Python projects. The RUSH platform incorporates Sphinx to generate comprehensive and user-friendly documentation. Sphinx facilitates the creation of structured documentation, including API references, code examples, and user guides. It streamlines the documentation process, making it easier for developers and users to understand and utilise the platform's features and functionalities.

By leveraging these additional dependencies, including Django Rest Framework, PyJWT, and Sphinx, the RUSH platform gains essential support for building robust APIs, implementing secure authentication mechanisms, and generating comprehensive documentation.


These dependencies contribute to the platform's overall functionality, security, and user-friendliness, ensuring a well-rounded and effective solution for managing sanitation and hygiene practices in Kenya.

Master Lists

The RUSH platform incorporates several master lists that play a vital role in its functioning and data management. These master lists include the administrative levels, questionnaire definitions, and the shape-file representing accurate administrative boundaries. The administrative levels master list defines the hierarchical structure of Kenya's administrative divisions, facilitating data organisation, user roles, and reporting.

Shape-file and Country Administrative Description

An essential master list in the RUSH platform is the shape-file that accurately represents the administrative levels of Kenya. This shape-file serves as a crucial reference for various components within the system, including user management, data management, and visualisation. The importance of the shape-file as a master list lies in its ability to provide precise and standardised administrative boundaries, enabling effective data identification, filtering, and visualisation. Here's an explanation of the significance of the shape-file in the RUSH platform:

  1. Accurate Administrative Boundaries:

    • The shape-file provides accurate and up-to-date administrative boundaries of Kenya, including the national, county, sub-county, and ward levels.
    • These boundaries define the jurisdictional divisions within the country and serve as a fundamental reference for assigning roles, managing data, and generating reports within the platform.
    • The accuracy of administrative boundaries ensures that data and administrative processes align with the established administrative hierarchy in Kenya.
  2. Data Identification and Filtering:

    • The shape-file enables efficient data identification and filtering based on administrative boundaries.
    • By associating data points with the corresponding administrative levels, the platform can retrieve and present data specific to a particular county, sub-county, or ward.
    • This functionality allows users to view, analyse, and report on data at different administrative levels, facilitating targeted decision-making and resource allocation.
  3. Visualisation and Geographic Context:

    • The shape-file serves as the basis for visualising data on maps within the RUSH platform.
    • By overlaying data on the accurate administrative boundaries provided by the shapefile, users can visualise the distribution of sanitation and hygiene indicators across different regions of Kenya.
    • This geo-spatial visualisation enhances understanding, supports data-driven decision-making, and aids in identifying geographic patterns and disparities.
  4. Data Consistency and Standardisation:

    • The shape-file, being a standardised and authoritative source, ensures consistency and uniformity in defining administrative boundaries across the platform.
    • It provides a reliable reference that aligns with the official administrative divisions recognised by the Ministry of Health and other relevant authorities.
    • The use of a consistent and standardised master list facilitates data aggregation, analysis, and reporting, ensuring reliable and comparable insights.

The shape-file sourced from the Ministry of Health is therefore a crucial master list within the RUSH platform: it provides accurate administrative boundaries, supports data identification and filtering, enables geo-spatial visualisation, and ensures data consistency and standardisation. By utilising the shape-file as the master list, the platform can effectively manage administrative processes, present data in a meaningful geographic context, and contribute to evidence-based decision-making for improved sanitation and hygiene practices throughout Kenya.

Questionnaire Definitions and Form Management

In addition to the administrative levels, the RUSH platform relies on another important master-list that defines the questionnaires used within the system. The questionnaire definition plays a crucial role in capturing the necessary data points and structuring the information collection process. Managing and maintaining the questionnaire forms are essential before seeding them into the system. This section outlines the importance of questionnaire definitions and the process of form management in the RUSH platform.

  1. Questionnaire Definitions:

    • Questionnaire definitions define the structure, content, and data points to be collected during data entry.
    • These definitions specify the questions, response options, and any associated validations or skip patterns.
    • Questionnaire definitions determine the type and format of data that can be entered for each question.
    • These definitions ensure consistency and standardisation in data collection across the platform.
  2. Form Management:

    • Form management involves the creation, customisation, and maintenance of the questionnaire forms.
    • Before seeding the forms into the system, it is crucial to ensure their accuracy, completeness, and adherence to data collection standards.
    • Form management includes activities such as form design, validation rules setup, skip logic configuration, and user interface customisation.
    • It is important to conduct thorough testing and quality assurance to ensure that the forms function correctly and capture the required data accurately.
  3. Form Fixes and Updates:

    • As part of the form management process, it is essential to address any issues or errors identified during testing or from user feedback.
    • Form fixes and updates may involve resolving bugs, improving user interface elements, modifying question wording, or adjusting validation rules.
    • It is crucial to carefully test and validate the fixed forms to ensure that the changes are successfully implemented and do not introduce new issues.

It is important to note that form management is an iterative process that may involve continuous improvements and updates as new requirements, feedback, or changes in data collection standards arise. 

3rd-Party Services

The RUSH platform relies on certain third-party services to enhance its functionality and provide essential features. These services include Mailjet for email communication and optionally Cloud Bucket as a storage service. Here's an explanation of their significance:

  1. Mailjet:

    • Mailjet is utilised for seamless email communication within the RUSH platform.
    • It provides features such as email delivery, tracking, and management, ensuring reliable and efficient communication between system users.
    • Mailjet enables the platform to send notifications, reports, and other email-based communications to users, enhancing user engagement and system responsiveness.
  2. Cloud Bucket (Optional):

    • The RUSH platform offers the option to utilise Cloud Bucket, a cloud-based storage service, for storing data such as uploaded or downloaded Excel files.
    • Cloud Bucket provides a secure and scalable storage solution, allowing for efficient management of large data files.
    • By utilising Cloud Bucket, the platform offloads the burden of storing and managing data files from the host server, resulting in improved performance and scalability.
    • Storing data files in Cloud Bucket also enhances data availability, durability, and accessibility, ensuring seamless access to files across the platform.

The use of Cloud Bucket as a storage service is optional, and alternative storage solutions can be considered based on specific requirements and constraints of the RUSH platform.
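
For illustration, a notification email could be sent through Mailjet's official Python client roughly as follows; the credentials and addresses are placeholders:

import os

from mailjet_rest import Client

mailjet = Client(
    auth=(os.environ["MJ_APIKEY_PUBLIC"], os.environ["MJ_APIKEY_PRIVATE"]),
    version="v3.1",
)
result = mailjet.send.create(data={
    "Messages": [{
        "From": {"Email": "noreply@example.org", "Name": "RUSH Platform"},
        "To": [{"Email": "user@example.org"}],
        "Subject": "Data approval pending",
        "TextPart": "You have submissions awaiting review.",
    }]
})
print(result.status_code)  # 200 on success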

Risks and Mitigation Strategies

The development and operation of the RUSH platform come with inherent risks that can impact its effectiveness, security, and usability. Identifying and addressing these risks through appropriate mitigation strategies is essential to ensure the smooth functioning and success of the platform. Here are some key risks associated with the RUSH platform and their corresponding mitigation strategies:

Data Security and Privacy Risks

Risk: Unauthorised access, data breaches, or misuse of sensitive information.
Mitigation: Implement robust security measures, such as encryption, access controls, and regular security updates. Conduct thorough security audits, provide user education on data security best practices, and ensure compliance with data protection regulations.

Technical Risks

Risk: System failures, infrastructure disruptions, or performance bottlenecks.
Mitigation: Employ redundant and scalable infrastructure to minimise single points of failure. Regularly monitor system performance, conduct load testing, and implement disaster recovery plans. Update software and hardware components to address vulnerabilities and ensure optimal performance.

Data Quality Risks

Risk: Inaccurate, incomplete, or unreliable data affecting decision-making processes.
Mitigation: Implement data validation mechanisms, enforce data entry standards, and provide user training on data collection best practices. Conduct regular data quality checks and provide feedback loops to data entry staff for improvement. Collaborate with data providers to improve data accuracy and completeness.

User Adoption and Engagement Risks

Risk: Low user adoption, resistance to change, or lack of engagement with the platform.
Mitigation: Conduct user needs assessments, involve stakeholders in the platform's design and development process, and provide comprehensive user training and support. Highlight the benefits and value of the platform to promote user adoption and engagement. Continuously gather user feedback and iterate on the platform based on user needs and preferences.

Stakeholder Collaboration Risks

Risk: Limited collaboration and data sharing among stakeholders.
Mitigation: Foster strong partnerships with relevant agencies, organisations, and institutions. Promote a culture of collaboration, sharing best practices, and jointly addressing common challenges. Establish clear data sharing agreements and protocols to encourage stakeholder participation and data contribution.

Resource Risks

Risk: Insufficient resources (human or technical) for platform development and maintenance.
Mitigation: Develop realistic resource plans and secure adequate funding for the platform's implementation and ongoing operation. Optimise resource allocation, prioritise critical features and functionalities, and leverage partnerships to share resources and expertise.


Regular risk assessments, monitoring, and proactive risk management practices should be integrated into the platform's lifecycle to identify emerging risks and implement appropriate mitigation strategies. 

Implementation Plan

The implementation plan for the RUSH platform involves a structured approach to ensure successful development and deployment. The plan includes tasks, timelines, and resource requirements, taking into account the available team members. Here's an outline of the implementation plan:

Task Breakdown

  1. Analyse requirements and finalise specifications.
  2. Design the system architecture and database schema.
  3. Develop the back-end functionality, including data management, API integration, and authentication.
  4. Implement the front-end components, including user interface design, data visualisation, and user interactions.
  5. Integrate and test the front-end and back-end components for seamless functionality.
  6. Implement security measures, including JWT authentication and secure data handling.
  7. Conduct thorough testing, including unit tests, integration tests, and user acceptance testing.
  8. Refine and optimise performance for data processing and visualisation.
  9. Prepare documentation, including user guides, API documentation, and system architecture documentation.
  10. Plan and execute the deployment strategy on the Google Cloud Platform.

Timelines

  1. Analyse requirements and finalise specifications: 1 week
  2. System architecture and database schema design: 1 week
  3. Back-end development: x weeks
  4. Front-end development: x weeks
  5. Integration and testing: x weeks
  6. Security implementation: x weeks
  7. Thorough testing and optimisation: x weeks
  8. Documentation preparation: 1 week
  9. Deployment on the Google Cloud Platform: 1 week

Resource Requirements

  1. 2 Back-end Developers: Responsible for back-end development, API integration, and database management.
  2. 2 Front-end Engineers: Responsible for front-end development, user interface design, and data visualisation.
  3. 1 Project Supervisor: Oversees the project, provides guidance, ensures adherence to requirements and timelines, and reviews Pull Requests.
  4. 1 Project Manager: Manages the project's overall progress, coordinates resources, and communicates with stakeholders.
  5. 1 Dev-ops Engineer: Handles deployment, infrastructure setup, and configuration on the Google Cloud Platform.

The team members work collaboratively to ensure timely completion of tasks, quality assurance, and adherence to project milestones. Regular communication, coordination, and agile project management practices contribute to effective resource utilisation and smooth implementation.

It is important to note that the timelines provided are estimates and can be adjusted based on the complexity of the project, team dynamics, and any unforeseen challenges that may arise during implementation.

Communication and Task Management

To facilitate efficient communication and task management within the team, the RUSH platform utilises Slack and Asana. These tools play crucial roles in enabling effective collaboration, communication, and task tracking.

Document Management

For document management, the RUSH platform utilises Google Drive. Team members can use Google Drive to store and manage various project documents, including design specifications, meeting minutes, reports, and other relevant files.

Report Hierarchy

The RUSH project follows a hierarchical reporting structure to ensure efficient communication and progress tracking. The hierarchy is designed to provide clear lines of reporting and facilitate effective decision-making. Here's an overview of the report hierarchy:

  1. Team Members

    • Back-end Developers, Front-end Engineers, and the Dev-ops Engineer report their progress, challenges, and updates directly to the Project Supervisor and Project Manager.
    • They communicate their completed tasks, pending work, and any obstacles they encounter during development and deployment activities via Asana.
  2. Project Supervisor

    • The Project Supervisor oversees the technical aspects of the project.
    • They provide guidance, support, and technical expertise to the team members.
    • The Project Supervisor breaks down all the tasks in Asana and assigns them to team members with due dates.
    • The Project Supervisor works closely with the Project Manager to ensure alignment with project goals and timelines.
  3. Project Manager

    • The Project Manager is responsible for the overall management and coordination of the RUSH project.
    • They track the progress of the development, monitor task completion, and manage resources and timelines.
    • The Project Manager communicates project updates, risks, and milestones to stakeholders and ensures effective collaboration among team members.

Regular meetings, such as stand-ups and sprint reviews, are conducted to discuss progress, address challenges, and align efforts across the team. This reporting hierarchy ensures effective communication, progress tracking, and efficient decision-making throughout the development and deployment phases of the RUSH platform.

Documentation References

The RUSH platform utilises various documentation references to provide comprehensive and accessible documentation for users and developers. These references include:

  1. Swagger:

    • Swagger is used to generate interactive API documentation for the RUSH platform's RESTful APIs.
    • By utilising the OpenAPI Specification, Swagger automatically generates detailed API documentation, including endpoint descriptions, request examples, and response details.
    • The Swagger documentation serves as a valuable resource for API consumers, facilitating seamless integration and understanding of the available endpoints and their functionality.
  2. GitHub Wiki:

    • The RUSH platform leverages GitHub Wiki as a documentation reference for storing and presenting project-related information.
    • The GitHub Wiki provides a collaborative space for developers to create and maintain documentation directly within the project's repository.
    • It allows for the organisation of documentation pages, versioning, and easy navigation, ensuring that the latest project information is readily available to team members and contributors.
  3. DBDocs.io:

    • DBDocs is utilised to generate comprehensive documentation for the RUSH platform's database schema and structure.
    • DBDocs automatically extracts information from the database and generates clear and well-structured documentation.
    • The DBDocs documentation serves as a valuable reference for understanding the database design, relationships, and entity attributes.
  4. ReadTheDocs:

    • ReadTheDocs is employed to host and present user and developer documentation for the RUSH platform.
    • ReadTheDocs allows for the creation of user-friendly and searchable documentation, making it easy for users to find the information they need.
    • It provides a centralised location for storing and organising documentation, ensuring that both technical and non-technical users can access the necessary resources.

These documentation references, including Swagger, GitHub Wiki, DBDocs.io, and ReadTheDocs, play integral roles in providing comprehensive, organised, and accessible documentation for the RUSH platform. By utilising these resources, the platform ensures that users, developers, and API consumers have the necessary information to effectively utilise and contribute to the platform.

Conclusion

The development of the RUSH platform involves a comprehensive low-level design (LLD) that encompasses various aspects, including its purpose, functional overview, user roles, administrative levels, dependencies, security considerations, testing strategies, and deployment plan. Through meticulous planning and consideration of these factors, the RUSH platform aims to address sanitation and hygiene challenges in rural and urban areas of Kenya effectively.

The platform's purpose is to provide real-time monitoring, information aggregation, and data analysis to support decision-making and improve sanitation and hygiene practices. With its capabilities such as data visualisation, questionnaire management, and user role administration, the RUSH platform empowers stakeholders at different administrative levels to make informed decisions and take appropriate actions.

The LLD also highlights the importance of master lists, including administrative levels and questionnaire definitions, which serve as crucial references for data management, user roles, and system operations. Additionally, the security considerations, testing strategies, and dependency management outlined in the LLD ensure robustness, performance, and reliability of the platform.

The deployment strategy leverages Google Cloud Platform, utilising containerisation with GKE, storing container images in the Container Registry, and employing services like CloudSQL, Cloud Storage Bucket, Ingress, Load Balancers, and Cloud DNS. The implementation plan provides a timeline, task breakdown, and resource requirements, allowing for efficient coordination and progress tracking.

Furthermore, the RUSH platform embraces effective communication and task management through the use of Slack and Asana, enabling seamless collaboration and efficient project execution. The documentation references, including Swagger, GitHub Wiki, DBDocs, and ReadTheDocs, facilitate comprehensive documentation and knowledge sharing among the team.

In conclusion, the RUSH platform's LLD serves as a foundational guide for its development, emphasising the importance of functionality, data management, security, testing, deployment, communication, and documentation. By adhering to this comprehensive design, the RUSH platform aims to make significant contributions to improving sanitation and hygiene practices, ultimately leading to better health outcomes in rural and urban areas of Kenya.

2023 New Features

UI Branding

Migrating Panels to Sidebar Menu

Figure 1: New Control Center with Sidebar

Previous Implementation Overview

The previous implementation of the user interface in the application primarily revolved around a panel-based design complemented by a tabbed navigation system. This approach was characterized by distinct sections within the main panel, where each section or page had its own set of tabs for detailed navigation. Here's a closer look at the key features of this previous implementation:

  1. Panel-Based Layout:

    • The interface was structured around main panels, each representing a major functional area of the application.
    • These panels served as the primary means of navigation and content organization, providing users with a clear view of the available options and functionalities.
  2. Tabbed Navigation:

    • Within each panel, a tabbed interface was used to further categorize and compartmentalize information and features.
    • The UserTab component, for instance, was a pivotal element in this design, allowing for the segregation of different user-related functionalities like Manage Data, User Management or Approval Panel.
  3. Role-Based Access: The navigation elements, both panels and tabs, were dynamically rendered based on the user’s role and permissions. This ensured that users accessed only the features and information pertinent to their roles.

  4. Content Organization: The content within each panel was organized logically, with tabs providing a secondary level of content segregation. This allowed users to navigate large amounts of information more efficiently.

  5. User Interaction: Interaction with the interface was primarily through clicking on various panels and tabs. The UI elements were designed to be responsive to user actions, providing immediate access to the content.

  6. Aesthetic and Functional Consistency: The previous design maintained a consistent aesthetic and functional approach across different panels and tabs, ensuring a cohesive user experience.

  7. Responsive Design: While the design was primarily desktop-focused, it included responsive elements to ensure usability across various screen sizes.

  8. State Management and URL Routing: The application managed the state of active panels and tabs, with URL routing reflecting the current navigation path. This was crucial for bookmarking and sharing links.

Figure 2: Previous Control Center

Key Considerations

The redesign of an application's user interface to incorporate a sidebar-based layout with expandable content requires a strategic and thoughtful approach. This transition aims to enhance the desktop user experience by offering a more intuitive and organized navigation system. These considerations will guide the development process, ensuring that the final product efficiently and effectively meets user needs. Below is a list of these key considerations:

1. Navigation Hierarchy and Structure:

2. User Role and Access Control:

3. State Management and URL Routing:

4. User Experience and Interaction:

5. Content Organization and Layout:

6. Performance Considerations:

7. Testing and Validation:

Example Ant-design implementation of sidebar component: https://ant.design/~demos/components-layout-demo-side

User Access Overview

const config = {
  ...
  roles: [
    {
      id: 1,
      name: "Super Admin",
      filter_form: false,
      page_access: [
        ...
        "visualisation",
        "questionnaires",
        "approvals",
        "approvers",
        "form",
        "reports",
        "settings",
        ...
      ],
      administration_level: [1],
      description:
        "Overall national administrator of the RUSH. Assigns roles to all county admins",
      control_center_order: [
        "manage-user",
        "manage-data",
        "manage-master-data",
        "manage-mobile",
        "approvals",
      ],
    },
    ...
  ],
  checkAccess: (roles, page) => {
    return roles?.page_access?.includes(page);
  },
  ...
};

Source: https://github.com/akvo/rtmis/blob/main/frontend/src/lib/config.js

  1. Roles Array:

    • The roles array within config defines different user roles in the system. Each role is an object with specific properties.
    • Example Role Object:
      • id: A unique identifier for the role (e.g., 1 for Super Admin).
      • name: The name of the role (e.g., "Super Admin").
      • filter_form: A boolean indicating whether the role has specific form filters (e.g., false for Super Admin).
      • page_access: An array listing the pages or features the role has access to (e.g., "visualisation", "questionnaires", etc. for Super Admin).
      • administration_level: An array indicating the level(s) of administration the role pertains to (e.g., [1] for national level administration for Super Admin).
      • description: A brief description of the role (e.g., "Overall national administrator of the RUSH. Assigns roles to all county admins" for Super Admin).
      • control_center_order: An array defining the order of items or features in the control center specific to the role.
  2. Check Access Function:

    • checkAccess is a function defined within config to determine if a given role has access to a specific page or feature.
    • It takes two parameters: roles (the role object) and page (the page or feature to check access for).
    • The function returns true if the page_access array of the role includes the specified page, indicating that the role has access to that page.
    • Example Usage of checkAccess:
      • λ ag config.checkAccess
        pages/profile/components/ProfileTour.jsx
        19:    ...(config.checkAccess(authUser?.role_detail, "form")
        28:    ...(config.checkAccess(authUser?.role_detail, "approvals")
        
        pages/settings/Settings.jsx
        29:    config.checkAccess(authUser?.role_detail, p.access)
        
        pages/control-center/components/ControlCenterTour.jsx
        14:    ...(config.checkAccess(authUser?.role_detail, "data")
        29:    config.checkAccess(authUser?.role_detail, "form")
        38:    ...(config.checkAccess(authUser?.role_detail, "user")
        48:    config.checkAccess(authUser?.role_detail, "form")
        57:    ...(config.checkAccess(authUser?.role_detail, "approvals")
        
        components/layout/Header.jsx
        74:      {config.checkAccess(user?.role_detail, "control-center") && (
        
Usage and Implications

Master Data Management

Figure 3: Administration and Entities Hierarchy

User Interactions

Add / Edit Administration Attribute

API: administration-endpoints

Add / Edit Administration

API: administration-endpoints

The option names for the Level field are situated between the National and Lowest levels. Including the National level is not feasible, as it would allow more than one country to appear, rendering the selection of a parent level logically null. While adding at the Lowest level is achievable, the last cascade option must be hidden to ensure that a newly added administration does not have an undefined level.

Add / Edit Entity

API: entity-endpoints

Add / Edit Entity Data

API: entity-data-endpoints

Administration / Entity Attribute Types

Option & Multiple Option Values

Use Case

We have a dataset that contains categorical information about the types of land use for various regions. This data will be utilized to classify and analyze land use patterns at the county level.

Feature

To achieve this, we will need to define option values for an attribute. In this scenario, the workflow is as follows:

Define Attribute

  • Attribute Name: Land Use Type
  • Attribute Code: Land_Use_Type (a unique identifier)
  • Type: Categorical (Option Values)
  • Administration Level: County

Define Option Values

  • Option Name: Residential
    • Option Code: Residential
  • Option Name: Commercial
    • Option Code: Commercial
  • Option Name: Agricultural
    • Option Code: Agricultural

Upload Data for Counties

County    Attribute Code  Value
County A  Land_Use_Type   Residential
County B  Land_Use_Type   Commercial
County C  Land_Use_Type   Agricultural

In this case, we define the "Option Values" for the "Land Use Type" attribute, allowing us to categorize land use patterns at the county level. The actual data for individual counties is then uploaded using the defined options.

Single Numeric Values

Use Case

We possess household counts from the 2019 census that correspond to the RTMIS administrative list at the sub-county level. This data can be employed to compute the household coverage per county, which is calculated as (# of households in that sub-county in RTMIS / # from the census).

Feature

To achieve this, we need to store the population value for individual sub-counties as part of their attributes. In this scenario, the workflow is as follows:

Define Attribute

Upload Data for Individual Sub-Counties

Sub-County  Attribute Code   Value
CHANGAMWE   Census_HH_Count  46,614
JOMVU       Census_HH_Count  53,472

In this case, the values for the county level will be automatically aggregated.
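
A minimal illustration of that roll-up with pandas; the county assignment is assumed for the example:

import pandas as pd

# Hypothetical sub-county census counts keyed by their parent county
df = pd.DataFrame({
    "county": ["Mombasa", "Mombasa"],
    "sub_county": ["CHANGAMWE", "JOMVU"],
    "Census_HH_Count": [46614, 53472],
})

# The county-level value is simply the sum over its sub-counties
county_totals = df.groupby("county")["Census_HH_Count"].sum()
print(county_totals)  # Mombasa    100086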

Disaggregated Numeric Values

Use Case

We aim to import data from the CLTS platform or the census regarding the count of different types of toilets, and we have a match at the sub-county level. This data will serve as baseline values for visualization.

Feature

For this use case, we need to store disaggregated values for an attribute. To do so, we will:

Define the Attribute

Upload Data for Individual Sub-Counties

Sub-County  Attribute Code          Disaggregation  Value
CHANGAMWE   Census_HH_Toilet_Count  Improved        305,927
CHANGAMWE   Census_HH_Toilet_Count  Unimproved      70,367

Database Overview

Entities Table

pos  table     column  null  dtype    len  default
1    Entities  id            Integer
2    Entities  name          Text

Entity Data Table

pos  table        column             null  dtype    len  default
1    Entity Data  id                       Integer
2    Entity Data  entity_id                Integer
3    Entity Data  code               Yes   Text
4    Entity Data  name                     Text
5    Entity Data  administration_id        Integer

Entity Attributes

pos  table              column     null  dtype    len  default
1    Entity Attributes  id               Integer
2    Entity Attributes  entity_id        Integer
3    Entity Attributes  name             Text

Entity Attributes Options

pos  table                      column               null  dtype    len  default
1    Entity Attributes Options  id                         Integer
2    Entity Attributes Options  entity_attribute_id        Integer
3    Entity Attributes Options  name                       Text

Entity Values

pos  table          column               null  dtype    len  default
1    Entity Values  id                         Integer
2    Entity Values  entity_data_id             Integer
3    Entity Values  entity_attribute_id        Integer
4    Entity Values  value                      Text

Administration Table

pos  table          column     null  dtype              len  default
1    administrator  id         NO    bigint                  administrator_id_seq
2    administrator  code       YES   character varying  255
3    administrator  name       NO    text
4    administrator  level_id   NO    bigint
5    administrator  parent_id  YES   bigint
6    administrator  path       YES   text

Administration Attributes

pos  table                      column    null  dtype                             len  default
1    Administration Attributes  id              Integer
2    Administration Attributes  level_id        Integer
3    Administration Attributes  code            Text                                   Unique (Auto-Generated)
4    Administration Attributes  type            Enum (Number, Option, Aggregate)
5    Administration Attributes  name            Text

Administration Attributes Options

pos  table                              column                        null  dtype    len  default
1    Administration Attributes Options  id                                  Integer
2    Administration Attributes Options  administration_attributes_id        Integer
3    Administration Attributes Options  name                                Text

Administration Values

pos  table                  column                        null  dtype    len  default
1    Administration Values  id                                  Integer
2    Administration Values  administration_id                   Integer
3    Administration Values  administration_attributes_id        Integer
4    Administration Values  value                               Integer
5    Administration Values  option                              Text

Rules:

Validation for Option Type

Materialized View for Aggregation

Visualization Query

id  type            name           attribute          option      value
1   administration  Bantul         Water Points Type  Dugwell     1
2   entity          Bantul School  Type of school     Highschool  1
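
One way to realise this combined view is a Postgres materialized view created through a Django migration. The sketch below unions the administration and entity value tables; all table and column names are assumptions drawn from the draft schema above, and the option-level aggregation is simplified:

from django.db import migrations

CREATE_VIEW_SQL = """
CREATE MATERIALIZED VIEW IF NOT EXISTS attribute_value_mv AS
SELECT av.id, 'administration' AS type, a.name,
       aa.name AS attribute, av.option, av.value
  FROM administration_values av
  JOIN administrator a ON a.id = av.administration_id
  JOIN administration_attributes aa
       ON aa.id = av.administration_attributes_id
UNION ALL
SELECT ev.id, 'entity' AS type, ed.name,
       ea.name AS attribute, ev.value AS option, 1 AS value
  FROM entity_values ev
  JOIN entity_data ed ON ed.id = ev.entity_data_id
  JOIN entity_attributes ea ON ea.id = ev.entity_attribute_id;
"""

DROP_VIEW_SQL = "DROP MATERIALIZED VIEW IF EXISTS attribute_value_mv;"


class Migration(migrations.Migration):

    dependencies = []  # point this at the migration that creates the tables

    operations = [migrations.RunSQL(CREATE_VIEW_SQL, DROP_VIEW_SQL)]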

API Endpoints

Administration Endpoints

Administration Create / Update (POST & PUT)
{
  "parent_id": 1,
  "name": "Village A",
  "code": "VA",
  "attributes": [{
      "attribute": 1,
      "value": 200
    },{
      "attribute": 2,
      "value": "Rural"
    },{
      "attribute": 3,
      "value": ["School","Health Facilities"]
    },{
      "attribute": 4,
      "value": {"Improved": 100,"Unimproved": 200}
    }
  ]
}
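A sketch of how this payload could be validated server-side with Django Rest Framework serializers; the field names mirror the payload above, but this is not the repository's actual serializer:

from rest_framework import serializers


class AttributeValueSerializer(serializers.Serializer):
    attribute = serializers.IntegerField()
    # Accepts a number, an option string, an option list, or an aggregate
    # dict, matching the four attribute types shown in the examples
    value = serializers.JSONField()


class AdministrationPayloadSerializer(serializers.Serializer):
    parent_id = serializers.IntegerField()
    name = serializers.CharField(max_length=255)
    code = serializers.CharField(required=False, allow_blank=True)
    attributes = AttributeValueSerializer(many=True, required=False)
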
Administration Detail (GET)
{
  "id": 2,
  "name": "Tiati",
  "code": "BT",
  "parent": {
    "id": 1,
    "name": "Baringo",
    "code": "B"
  },
  "level": {
    "id": 1,
    "name": "Sub-county"
  },
  "childrens": [{
    "id": 2,
    "name": "Tiati",
    "code": "BT"
  }],
  "attributes": [{
      "attribute": 1,
      "type": "value",
      "value": 200
    },{
      "attribute": 2,
      "type": "option",
      "value": "Rural"
    },{
      "attribute": 3,
      "type": "multiple_option",
      "value": ["School","Health Facilities"]
    },{
      "attribute": 4,
      "type": "aggregate",
      "value": {"Improved": 100,"Unimproved": 200}
    }
  ]
}
Administration List (GET)

Query Parameters (for filtering data):

{
  "current": "self.page.number",
  "total": "self.page.paginator.count",
  "total_page": "self.page.paginator.num_pages",
  "data": [
    {
      "id": 2,
      "name": "Tiati",
      "code": "BT",
      "parent": {
        "id": 1,
        "name": "Baringo"
      },
      "level": {
        "id": 1,
        "name": "Sub-county"
      }
    }
]}
Administration Attributes CRUD (POST & PUT)
{
  "name": "Population",
  "type": "value",
  "options": []
}
Administration Attributes (GET)
[{
  "id": 1,
  "name": "Population",
  "type": "value",
  "options": []
},{
  "id": 2,
  "name": "Wheter Urban or Rural",
  "type": "option",
  "options": ["Rural","Urban"]
},{
  "id": 3,
  "name": "HCF and School Availability",
  "type": "multiple_option",
  "options": ["School","Health Care Facilities"]
},{
  "id": 4,
  "name": "JMP Status",
  "type": "aggregate",
  "options": ["Improved","Unimproved"]
}]

Entity Endpoints

Entity Create / Update (POST / PUT)
{
  "name": "Schools"
}
Entity List (GET)
{
  "current": "self.page.number",
  "total": "self.page.paginator.count",
  "total_page": "self.page.paginator.num_pages",
  "data": [
    {
      "id": 1,
      "name": "Health Facilities"
    },
    {
      "id": 2,
      "name": "Schools"
    }
]}

Entity Data Endpoints

Entity Data Create / Update (POST / PUT)
{
  "name": "Mutarakwa School",
  "code": "101",
  "administration": 1,
  "entity": 1
}
Entity Data List (GET)
{
  "current": "self.page.number",
  "total": "self.page.paginator.count",
  "total_page": "self.page.paginator.num_pages",
  "data": [
    {
      "id": 1,
      "name": "Lamu Huran Clinic",
      "code": "101",
      "administration": {
          "id": 111,
          "name": "Bura",
          "full_name": "Kenya - Tana River - Bura - Bura - Bura",
          "code": null
      },
      "entity": {
          "id": 1,
          "name": "Health Care Facilities"
      }
    }
]}

Bulk Upload

As an administrator of the system, the ability to efficiently manage and update administrative data is crucial. To facilitate this, a feature is needed that allows for the bulk uploading of administrative data through a CSV file. The CSV file format is generated from the administration level table and the administration attribute table. When downloading a template, system administrators can choose which attributes they want to include in the template.

The CSV template will contain columns representing all administrative levels (such as National, County, Sub-County, Ward, and Village) along with their respective IDs. Additionally, it will include columns for the selected attributes associated with each administrative unit, as defined in the administration attribute table.
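
A sketch of how such a template could be assembled with pandas; the column names follow the example template below, and the ID columns and the database queries that would fetch levels and attributes are omitted for brevity:

import pandas as pd

# In practice these come from the administration level and attribute tables
levels = ["County", "Sub-County", "Ward", "Village"]
selected_attributes = ["Population", "Whether_Urban_or_Rural"]

# An empty frame with just the header row gives administrators the template
template = pd.DataFrame(columns=levels + selected_attributes)
template.to_csv("administration_template.csv", index=False)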

Acceptance Criteria

CSV File Format and Structure
Optional Codes and Attributes
Data Validation and Integrity
User Feedback and Error Handling

Example CSV Template for Administration Data

County  Sub-County    Ward   Village      Population  Whether_Urban_or_Rural  HCF_and_School_Availability    JMP_Status_Improved  JMP_Status_Unimproved
Kitui   Mwingi North  Kyuso  Ikinda       200         Rural                   School;Health Care Facilities  100                  200
Kitui   Mwingi North  Kyuso  Gai Central  150         Urban                   Health Care Facilities         120                  180

Notes:

Bulk Upload Process

Example process:

from django_q.tasks import async_task

from api.v1.v1_jobs.constants import JobTypes, JobStatus
from api.v1.v1_jobs.models import Jobs
from api.v1.v1_users.models import SystemUser

# "filename" is the uploaded Excel file recorded earlier in the flow
job = Jobs.objects.create(type=JobTypes.validate_administration,
                          status=JobStatus.on_progress,
                          user=request.user,
                          info={
                              'file': filename,
                          })
task_id = async_task('api.v1.v1_jobs.jobs.validate_administration',
                     job.id,
                     hook='api.v1.v1_jobs.job.seed_administration')
  1. Initiating the Bulk Upload Task:

    • When a bulk upload is initiated, the async_task function is called.
    • The function is provided with the task name 'api.v1.v1_jobs.jobs.validate_administration', which refers to the function responsible for validating the uploaded administration data.
  2. Passing Job ID to the Task:

    • Along with the task name, the job ID (job.id) is passed to the async_task function.
    • This job ID is used to associate the asynchronous task with the specific job record in the Jobs table.
  3. Task Execution and Hook:

    • The async_task function also receives a hook parameter, in this case 'api.v1.v1_jobs.job.seed_administration'.
    • This hook is another function, called after the validation task completes. It is responsible for seeding the validated administration data into the database.
  4. Task ID Generation:

    • The async_task function generates a unique task ID for the job. This task ID is used to track the progress and status of the task.
    • The task ID is stored in the Jobs table, associated with the corresponding job record.
  5. Monitoring and Tracking:

    • With the task ID, administrators can monitor and track the status of the bulk upload process.
    • The Jobs table provides a comprehensive view of each job, including its current status, result, and any relevant information.
  6. Error Handling and Notifications:

    • If the validation or seeding task encounters any errors, these are captured and recorded in the Jobs table.
    • The system can be configured to notify administrators of any issues, allowing for prompt response and resolution.
  7. Completion and Feedback:

    • Once the bulk upload task is completed (both validation and seeding), its final status is updated in the Jobs table.
    • Administrators can then review the outcome of the job and take any necessary actions based on the results.
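
Under the names used in the snippet above, the hook might look roughly like the sketch below. The task_id lookup and the failed/done status names are assumptions about the Jobs model and JobStatus constants, not confirmed implementation details:

from api.v1.v1_jobs.constants import JobStatus
from api.v1.v1_jobs.models import Jobs


def seed_administration(task):
    # django-q calls the hook with the finished Task object
    job = Jobs.objects.get(task_id=task.id)  # assumes Jobs stores the task id
    if not task.success:
        job.status = JobStatus.failed  # assumed status constant
        job.save()
        return
    # Seed the validated rows here, then mark the job as done
    job.status = JobStatus.done  # assumed status constant
    job.save()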

Database Seeder

Administration Seeder

In the updated approach for seeding initial administration data, the platform is shifting from TopoJSON to Excel as the input format. While TopoJSON has been the format of choice, particularly for its geospatial capabilities, which are essential for visualization purposes, the move to Excel is driven by the need for a more flexible and user-friendly data input method.

However, this transition introduces potential challenges in maintaining consistency between the Excel-based administration data and the TopoJSON used for visualization. The inherent differences in data structure and handling between these two formats could lead to discrepancies, impacting the overall data integrity and coherence in the system. This change necessitates a careful consideration of strategies to ensure that the data remains consistent and reliable across both formats.

Key Considerations
Excel File Structure for Seeder

File Naming Convention

File Content Structure

Each file contains details of sub-counties and wards within the respective county.

Sub-County_ID  Sub-County  Ward_ID  Ward
201            Westlands   301      XYZ
201            Westlands   302      ABC
...            ...         ...      ...
Seeder Adaptation

Administration Attribute Seeder

Assumptions
Example Excel File Structure
Admin_ID  Attribute1  Attribute2  ...
1         Value1      Value2      ...
2         Value1      Value2      ...
...       ...         ...         ...
Seeder Script
import pandas as pd
from your_app.models import Administration, AdministrationAttribute

class AdministrationAttributeSeeder:
    def __init__(self, file_path):
        self.file_path = file_path

    def run(self):
        # Load data from Excel file
        df = pd.read_excel(self.file_path)

        # Iterate through each row in the DataFrame
        for index, row in df.iterrows():
            admin_id = row['Admin_ID']
            # Retrieve the corresponding Administration object
            administration = Administration.objects.get(id=admin_id)

            # Create or update AdministrationAttribute
            for attr in row.index[1:]:  # Skipping the first column (Admin_ID)
                attribute_value = row[attr]
                AdministrationAttribute.objects.update_or_create(
                    administration=administration,
                    attribute_name=attr,
                    defaults={'attribute_value': attribute_value}
                )

        print("Administration attributes seeding completed.")

# Usage
seeder = AdministrationAttributeSeeder('path_to_your_excel_file.xlsx')
seeder.run()

Note:

  1. File Path: Replace 'path_to_your_excel_file.xlsx' with the actual path to the Excel file containing the administration attributes; the Excel files will be safely stored in backend/source.
  2. Model Structure: This script assumes the existence of Administration and AdministrationAttribute models. Adjust the script according to your actual model names and structures.
  3. update_or_create: This method is used to either update an existing attribute or create a new one if it doesn't exist.
  4. Error Handling: Add appropriate error handling to manage cases where the administration ID is not found or the file cannot be read.

Task Scheduler

The system needs to perform scheduled tasks periodically, such as backups and report generation. The cron expression is a familiar format for configuring tasks to run periodically, and using cron expressions in the Task Scheduler is the preferred approach.

Django Q has a feature to run scheduled tasks and can be used to implement the Task Scheduler. With the croniter package, it supports cron expressions.

Configuration

Use django settings to configure the Task Scheduler. Example:

SCHEDULED_TASKS = {
    "task name" : {
        "func": "function_to_run",
        "cron": "* * * * *",
        "kwargs": {
            "hook": "post_function_to_run"
        }
    },
}

The task attributes (func, cron, ...) form a dictionary representation of the Django Q schedule parameters.
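
For reference, a single entry of this mapping corresponds to a django-q schedule registration along these lines (a sketch; the actual mapping is done by the synchronization command described below):

from django_q.models import Schedule
from django_q.tasks import schedule

# Mirrors the "task name" entry in SCHEDULED_TASKS above
schedule(
    "function_to_run",
    name="task name",
    schedule_type=Schedule.CRON,
    cron="* * * * *",
    hook="post_function_to_run",
)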

Configuration update synchronization

The Task Scheduler configuration must support adding new tasks, deleting tasks, and changing task parameters. The command to synchronize configuration updates needs to be implemented. This command will be run on Django startup to apply configuration changes.

from dataclasses import dataclass, field
from typing import List

from django_q.models import Schedule


def sync_scheduled_tasks():
    schedules = get_setting_schedules()
    existing_schedules = list(Schedule.objects.all())
    actions = calculate_schedule_changes(schedules, existing_schedules)
    apply_sync_actions(actions)


@dataclass
class SyncActions:
    to_add: List[Schedule] = field(default_factory=list)
    to_modify: List[Schedule] = field(default_factory=list)
    to_delete: List[Schedule] = field(default_factory=list)


def get_setting_schedules() -> List[Schedule]:
    """
    Converts the schedules configuration in the app settings to django-q
    schedule objects
    """
    ...


def calculate_schedule_changes(
    schedules: List[Schedule], existing_schedules: List[Schedule]
) -> SyncActions:
    """
    Calculates the operations that have to be taken in order to sync the
    schedules in the settings with the existing schedules in the db
    """
    ...


def apply_sync_actions(actions: SyncActions):
    """
    Applies the operations required to sync the schedules in the settings
    with the schedules in the DB
    """
    ...
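
Continuing the skeleton above, the diff step could match schedules by name (an assumption; any stable key would do). Here, differs is a hypothetical helper that compares the relevant schedule fields:

def calculate_schedule_changes(
    schedules: List[Schedule], existing_schedules: List[Schedule]
) -> SyncActions:
    wanted = {s.name: s for s in schedules}
    existing = {s.name: s for s in existing_schedules}
    return SyncActions(
        to_add=[s for name, s in wanted.items() if name not in existing],
        to_modify=[s for name, s in wanted.items()
                   if name in existing and differs(s, existing[name])],
        to_delete=[s for name, s in existing.items() if name not in wanted],
    )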

List of scheduled tasks

Entity Type of Question

How to Achieve Entity Type of Question

To achieve an entity type of question, we need to ensure that the question type is supported in both web forms and mobile applications. We should consider the question format, ensuring alignment with akvo-react-form, and verify that the attributes can be stored in the database. For this case, we will use a type cascade with an additional attribute for further classification.

Handling Existing Cascade Type of Question

As mentioned earlier, we will use an extra attribute to manage existing cascade-type questions. If a cascade question does not have extra attributes and does not provide an API endpoint, the entity cascade will not work.

Provide API attribute for Entity Cascade

Implementing an API attribute for Entity Cascade is a significant enhancement aimed at improving the functionality of web forms. This feature involves adding an API attribute at the question level within a questionnaire and defining it as an object. The primary purpose of this object is to store the API URL, which is crucial for enabling the Entity Cascade functionality. This should be done as follows:

{
  "api": {
    "endpoint": "<API URL here>"
  }
}

The format for the response can be found at the following URL: 

https://raw.githubusercontent.com/akvo/akvo-react-form/main/example/public/api/entities/1/13

Extra attribute for Entity Cascade
  • type: "entity". This identifies on the backend that the entity table will be used to filter entity data and that the entity SQLite file should be sent to the mobile app.
  • name: an existing entity name, filled in exactly as it appears in the database, to prevent data from not being found. See https://wiki.cloud.akvo.org/link/65#bkmrk-entities-table
  • parentId: the question source ID that triggers the list of entities to appear based on the answer to that question. If the questionnaire is filled out via a Webform, the entities appear from the API response; if it is filled out via the Mobile app, they appear from the SQLite query results.

Example
{
  "id": 67,
  "label": "School cascade",
  "name": "school_cascade",
  "type": "cascade",
  "required": false,
  "order": 7,
  "api": {
    "endpoint": "https://akvo.github.io/akvo-react-form/api/entities/1/"
  },
  "extra": {
    "type": "entity",
    "name": "School",
    "parentId": 5
  }
},
Back-end Changes

We need to modify the form details response by changing this file to retrieve the SQLite file based on the extra `type` attribute:

https://github.com/akvo/rtmis/blob/main/backend/api/v1/v1_forms/serializers.py#L322-L331

for cascade_question in cascade_questions:
  if cascade_question.type == QuestionTypes.administration:
    source.append("/sqlite/administrator.sqlite")
  elif (
    cascade_question.extra and
    cascade_question.extra.get('type') == 'entity'
  ):
    source.append("/sqlite/entity_data.sqlite")
  else:
    source.append("/sqlite/organisation.sqlite")
return source

https://github.com/akvo/rtmis/blob/main/backend/api/v1/v1_forms/serializers.py#L198-L216

def get_source(self, instance: Questions):
    user = self.context.get('user')
    assignment = self.context.get('mobile_assignment')
    if instance.type == QuestionTypes.cascade:
        if instance.extra:
            cascade_type = instance.extra.get("type")
            cascade_name = instance.extra.get("name")
            if cascade_type == "entity":
                # Get entity type by name
                entity_type = Entity.objects.filter(name=cascade_name).first()
                entity_id = entity_type.id if entity_type else None
                return {
                    "file": "entity_data.sqlite",
                    "cascade_type": entity_id,
                    "cascade_parent": "administrator.sqlite"
                }
    # ... the rest of the code

The backend response will be

{
   ...
    "source": {
      "file": "entity_data.sqlite",
      "cascade_type": 1,
      "cascade_parent": "administrator.sqlite"
    }
}

Mobile Handler for Entity Type of Question

Once the mobile application can read the entity SQLite file, we can execute a filtering query based on the selected administration.
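
The app itself runs this query in JavaScript; purely as an illustration of the filtering logic, here is a Python sketch, assuming the entity SQLite file contains a `nodes` table with `id`, `name`, `parent` (administration ID), and `entity` (entity type ID) columns (the table and column names are assumptions):

import sqlite3


def filter_entities(db_path, administration_id, entity_type_id):
    # Open the entity SQLite file shipped to the mobile app.
    connection = sqlite3.connect(db_path)
    connection.row_factory = sqlite3.Row
    # Keep only entities of the requested type under the selected administration.
    rows = connection.execute(
        "SELECT id, name FROM nodes WHERE parent = ? AND entity = ?",
        (administration_id, entity_type_id),
    ).fetchall()
    connection.close()
    return [dict(row) for row in rows]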

Test cases
Store selected administration

We need to store the selected administration to quickly retrieve the parent of the entity cascade. Once the administration is selected, the related entity list should be made available.

To achieve this, we can add a new global state called `administration` and set its value using the onChange event in the TypeCascade component.

Modify initial cascade

Change how the dropdown data is initialized by checking the `cascadeParent` from the source value. If `cascadeParent` exists, use it as a parameter to retrieve the selected administration as the parent ID. Otherwise, obtain the parent from the `parent_id` value.

To filter entity types, we can utilize the 'cascadeType' property to display a list of relevant entities with previously defined extra attributes. The implementation will look as follows:

https://github.com/akvo/rtmis/blob/main/app/src/form/fields/TypeCascade.js#L115-L134

const parentIDs = cascadeParent === 'administrator.sqlite' ? prevAdmAnswer || [] : parentId || [0];
const filterDs = dataSource
  ?.filter((ds) => {
    if (cascadeParent) {
      return parentIDs.includes(ds?.parent);
    }
    return (
      parentIDs.includes(ds?.parent) ||
      parentIDs.includes(ds?.id) ||
      value?.includes(ds?.id) ||
      value?.includes(ds?.parent)
    );
  })
  ?.filter((ds) => {
    if (cascadeType && ds?.entity) {
      return ds.entity === cascadeType;
    }
    return true;
  });

Grade Determination Process

Grade Claim

The Sub-County or Ward PHO opens a Grade Determination process by claiming that a community has reached a G level. A team is assembled to collect data in all households and at the community level. The collected data is associated with the Grade Determination process, i.e. it is not stored alongside the routine data. Specific questions could be added to the Community form to reinforce the accountability of PHOs in claiming a grade. Ex:

The collected data does not need to go through the data approval workflow the routine data is subject to. Based on the collected data, the Sub-County or Ward PHO can decide to submit the claim for approval to the Sub-County PHO or to cancel it.

The platform computes and displays the % completion of the data collection activity associated with the Grade Determination process (the number of households in a community, the denominator, is collected in the community form). A % completion below 100% does not prevent the Sub-County or Ward PHO from submitting the claim for approval.
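
As a sketch, the completion figure is a simple ratio; the helper below is hypothetical, not the actual implementation:

def completion_percentage(households_collected: int, total_households: int) -> float:
    # total_households is the denominator collected in the community form.
    if total_households == 0:
        return 0.0
    return 100.0 * households_collected / total_households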

Features

Claim Certification

Claim certification is done by performing another round of data collection on a sampled number of households per candidate community. The collected data does not need to go through the data approval workflow the routine data is subject to. The collected data goes to a different bucket than the routine data. The data collection is performed by staff of a Sub-County different from the one the community belongs to. The data collection is done in batches: a team will plan and perform the data collection for multiple communities. The County PHO is in charge of creating the batches and of assigning them to the Sub-County PHO, who will later put together a team of data collectors. Candidate communities are expected to be assigned to a batch within two months of being approved for the certification process.

Specific sampling rules apply:

Based on the data collected, the County PHO can decide to:

The users are able to see the outcomes for which the targeted level was not reached in order to provide feedback to the community.

Features

 

Mobile Application

Introduction

The Mobile Application for Real-Time Management Information System (RTMIS) plays a pivotal role in facilitating remote data collection, primarily designed to support offline data submission for enumerators. Enumerators, who are an integral part of the data collection process, are assigned the responsibility of collecting critical information beyond the scope of Data Collectors. This mobile application serves as an indispensable tool, equipping enumerators with the means to efficiently gather data, even in areas with limited or no connectivity.

The Mobile Application for Real-Time Management Information System (RTMIS) is built upon a module derived from the National Management Information System (NMIS) Mobile Application (https://github.com/akvo/nmis-mobile). The NMIS Mobile App serves as a generic data collection tool designed to accommodate the needs of multiple services and organizations.

Within this context, the RTMIS Mobile Application takes center stage as a specialized module tailored to support the unique requirements of real-time data collection for management information. Specifically crafted to cater to the demands of the RTMIS, this mobile application empowers enumerators and data collectors with a targeted set of features and functionalities.

Requirements

Initial Setup

  1. Setup New Expo Application:

    • Create a new Expo application as a foundation for the RTMIS Mobile App.
    • Configure the Expo environment with the necessary dependencies.
  2. Integration from nmis-mobile Repository:

    • Copy the entire app folder from the nmis-mobile repository to the RTMIS repository.
    • Ensure that the integration includes all relevant code, assets, and configurations.
    • Make the necessary modifications to the module to align it with the specific requirements and functionalities of the RTMIS back-end.
  3. Docker Compose Setup for Development:

    • Implement Docker Compose setup to enable seamless development of the Mobile App within the RTMIS project.
    • Integrate the Mobile App into the RTMIS development environment to ensure compatibility and ease of testing.
  4. Authentication Method Enhancement:

    • Implement changes to introduce a new and improved authentication method for the RTMIS Mobile App.
    • Ensure that the new authentication method aligns with the security requirements and standards of the RTMIS project.
    • Update relevant documentation and user instructions to reflect the changes.
  5. CI/CD Setup for Mobile App Deployment:

    • Establish a robust CI/CD pipeline for the RTMIS Mobile App, enabling automated deployment to the Expo platform.
    • Configure the pipeline to trigger builds and deployments based on code changes and updates to the Mobile App repository.
    • Ensure that the CI/CD setup includes proper testing and validation procedures before deploying to Expo.
  6. Integration of Django Mobile Module:
    • Incorporate the Django mobile module from the National Wash MIS repository folder: v1_mobile into the RTMIS back-end.

Overview

To support the integration of the mobile application, several critical updates are required for both the RTMIS platform's back-end and front-end components. These modifications encompass a range of functionalities designed to seamlessly accommodate the needs of the mobile application. Key updates will include, but are not limited to:

1. Back-end

  1. Authentication and Authorization API for Mobile Users:

    • Integrate automated pass-code generation functionality to generate unique 6-digit alpha-numeric pass-codes for multiple mobile data collector assignments (see the generation sketch after this list).
    • Establish an API mechanism to authenticate and authorize mobile users based on a pass-code. This ensures secure access to the RTMIS platform while simplifying user management for mobile data collectors.
  2. Form List and Cascade Retrieval API:

    • Develop Cascade SQLite generator for both Entities and Administration.
    • Implement an API that enables the mobile application to retrieve forms and cascades from the RTMIS platform. This functionality is vital for data collection activities performed by enumerators and data collectors in the field.
  3. Data Monitoring API:

    • Modify data/batch submission-related models and API to support monitoring submission.
    • Modify approval workflow-related models and API to support monitoring submission.
  4. Data Synchronisation API:

    • Make the necessary modifications to the v1_mobile module to align it with the specific requirements and functionalities of the RTMIS back-end:
      • Preload existing data-points.
      • Modify Mobile Form submission-related models and API to support monitoring submission.
  5. Data Entry Staff Data Editing and Approval Workflow:

    • Develop functionality for Data Entry Staff to add Mobile Assignments. The Data Entry Staff user can have multiple mobile assignments, which will require village ID and form ID. When a mobile assignment is created, it will generate a pass-code that will be used by Enumerators to collect data in the field via the Mobile App.
    • Develop functionality for Data Entry Staff to edit data submitted via the mobile application.
  6. Form Updates:
    • Develop New Question Type: Data-point Question
    • New Question Parameters: Display Only
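
A minimal sketch of the pass-code generation mentioned in back-end item 1, using Python's secrets module; the helper names and the uniqueness check are assumptions for illustration:

import secrets
import string

# Lower-case letters and digits for a 6-character alpha-numeric code.
ALPHABET = string.ascii_lowercase + string.digits


def generate_passcode(length: int = 6) -> str:
    # Draw each character from a cryptographically secure source.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


def generate_unique_passcode(existing_codes: set, length: int = 6) -> str:
    # Retry until the code does not collide with an already-issued one.
    code = generate_passcode(length)
    while code in existing_codes:
        code = generate_passcode(length)
    return code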

2. Front-end

  1. Dedicated "Mobile Data Collectors" Section:

    • Create a dedicated section within the RTMIS front-end, labeled "Mobile Data Collectors," where Data Entry Staff can easily access and manage mobile data collector assignments.
  2. "Add Mobile Data Collector" Feature:

    • Implement a user-friendly feature within the "Mobile Data Collectors" section that allows Data Entry Staff to initiate the process of adding mobile data collectors.
  3. Assignment Details Form:

    • Develop a user-friendly form that Data Entry Staff can use to input assignment details:
      • the name of the assignment
      • Level (for scoping the administration selection)
      • Multiple Administration village selection
      • and form(s) selection.
    • Once the Data Entry Staff presses "create," the back-end will process it and return a 6-digit Alphanumeric code that will be used for mobile authentication.
  4. Communication of Pass-codes:

    • Provide a mechanism within the front-end that allows Data Entry Staff to easily communicate the generated pass-codes to the respective mobile data collectors.
  5. User Guidance (RTD Updates):

    • Include user guidance elements and feedback mechanisms in the front-end to assist Data Entry Staff throughout the process, ensuring that they understand the workflow and status of each assignment.

3. Mobile App

  1. Mobile App User Schema:
    • Modify Authentication Method
  2. Mobile Database Modification
  3. Mobile UI Modification
    • Develop a screen where users can see and sync the list of existing data-points
    • Develop a screen where users can choose to add a new data-point or edit an existing one
      • NMIS-sync.png
    • Read more: Mobile UI Modification

Back-end

Back-end Database Migrations

Mobile Assignment Schema

1. Mobile Group (PENDING)
pos column null dtype len default
1 id No Integer - (Auto-increment)
3 name No Text 6 -
4 created_by
2. Mobile Assignment Table
pos column null dtype len default
1 id No Integer - (Auto-increment)
2 name No Text 255 -
3 passcode No Text 6 (Auto-generated)
4 token No Text 255 JWT String
5 created_by No Integer - (Foreign Key)

Explanation: The MobileAssignment table stores information about mobile data collector assignments. The id column serves as the primary key and a unique identifier for each assignment. The name column holds the assignment's name or description, while the passcode column stores a unique pass-code for mobile data collector access.

3. Mobile Assignment Form Administration Table (Junction):
pos column null dtype len default
1 id No Integer - (Auto-increment)
2 assignment_id No Integer - -
3 form_id No Integer - -
4 administration_id No Integer - -

Explanation: This table serves as a junction table that establishes the many-to-many relationship between mobile assignments (MobileAssignment), forms (form_id), and administrative levels (administration_id). The id column remains the primary key, and the other columns associate each row with the respective assignment, form, and administration.
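
A minimal Django model sketch matching these tables; the related model paths ("v1_forms.Forms", "v1_profile.Administration") and field options are assumptions for illustration, not the actual code:

from django.conf import settings
from django.db import models


class MobileAssignment(models.Model):
    name = models.CharField(max_length=255)
    passcode = models.CharField(max_length=6, unique=True)
    token = models.TextField()
    # The user (Data Entry Staff) who created the assignment.
    created_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)


class MobileAssignmentFormAdministration(models.Model):
    # Junction row linking one assignment to a (form, administration) pair.
    assignment = models.ForeignKey(MobileAssignment, on_delete=models.CASCADE)
    form = models.ForeignKey("v1_forms.Forms", on_delete=models.CASCADE)
    administration = models.ForeignKey("v1_profile.Administration", on_delete=models.CASCADE)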

Current Schema Updates

1. Data-point Table
pos table column null dtype len default
1 data id NO bigint   data_id_seq
2 data form_id NO bigint
3 data administration_id NO bigint
4 data name NO text
5 data geo YES jsonb
6 data created NO datetime
7 data updated YES datetime
8 data created_by_id NO bigint
9 data updated_by_id YES bigint
10 data uuid NO uuid   uuid.uuid4
2. Question Table
pos table column null dtype len default
1 question id NO bigint   question_id_seq
2 question order YES bigint
3 question text NO text
4 question name NO character varying 255
5 question type NO int
6 question meta NO bool
7 question required NO bool
8 question rule YES jsonb
9 question dependency YES jsonb
10 question form_id NO bigint
11 question question_group_id NO bigint
12 question api YES jsonb
13 question extra YES jsonb
14 question tooltip YES jsonb
15 question fn YES jsonb
16 question display_only YES bool
17 question meta_uuid YES bool
18 question disabled YES jsonb
19 question hidden YES jsonb

3. Option Table
pos table column null dtype len default
1 option id NO bigint   option_id_seq
2 option order YES bigint
3 option code YES character varying 255
4 option name NO text
5 option other NO bool
6 option question_id NO bigint
7 option color YES text
API Endpoints

New Endpoints

1. Create Mobile Assignment

Request:

{
  "name": "Kelewo Community Center Health Survey",
  "administrations": [321,398],
  "forms": [1,2,4]
}

Response:

{
  "id": 1,
  "passcode": "4dadjyla"
}
2. Get List of Mobile Assignments
{
  "current": 1,
  "total": 11,
  "total_page": 2,
  "data": [{
    "id": 1,
    "name": "Kelewo Community",
    "passcode": "3a45562",
    "forms": [{
      "id": 1,
      "name": "Health Facilities"
    },{
      "id": 2,
      "name": "CLTS"
    },{
      "id": 3,
      "name": "Wash In Schools"
    }],
    "administrations": [{
      "id": 765,
      "name": "Kelewo"
    }]
  }]
}

Token Modifications

In the updated RTMIS Mobile application, a significant change is being introduced to enhance security and access control. This change involves modifying the token generation process for Mobile Data Collector Assignments. Here's a detailed description of this update:

1. Context and Need for Change
2. Custom Token Generation for Enhanced Security
3. Example Custom Token Generation
import jwt
from rtmis.settings import SECRET_KEY

def generate_assignment_jwt(assignment_id, allowed_forms_ids, administration_ids, secret_key):
    # Custom claim for Mobile Assignment
    custom_claim = {
        "assignment_id": assignment_id,
        "allowed_endpoints": "api/v1/mobile/device/*",
        "forms": allowed_forms_ids,
        "administrations": administration_ids
    }
    # Payload of the JWT without an expiration time
    payload = {
        "assignment": custom_claim
    }
    # Generate the JWT token signed with the given secret key
    token = jwt.encode(payload, secret_key, algorithm="HS256")
    return token

# Example usage
assignment_id = "assignment_123"  # Unique identifier for the mobile assignment
allowed_forms_ids = [101, 102, 103]  # Example list of allowed form IDs
administration_ids = [201, 202]  # Example list of allowed administration IDs

token = generate_assignment_jwt(assignment_id, allowed_forms_ids, administration_ids, SECRET_KEY)
4. Token Payload
{
  "user_id": "<the user who create assignment>",
  "assignment_id": "<assignment_id>",
  "allowed_endpoints": "api/v1/mobile/device/*",
  "administration_ids": ["administration_id"],
  "allowed_forms_ids": ["form_id"],
  "exp": 1701468103,
  "iat": 1701424903,
  "jti": "923cfad9ff244e6897bfef2260dde4ee",
  ...other_stuff
}
5. Example Custom Authentication
from django.conf import settings
from rest_framework.authentication import BaseAuthentication
from rest_framework import exceptions
import jwt

class MobileAppAuthentication(BaseAuthentication):
    def authenticate(self, request):
        # Retrieve the token from the request
        token = request.META.get('HTTP_AUTHORIZATION')

        if not token:
            return None  # Authentication did not succeed

        # Strip the "Bearer " prefix if present
        if token.startswith('Bearer '):
            token = token[len('Bearer '):]

        try:
            # Decode the token with the project's secret key
            decoded_data = jwt.decode(token, settings.SECRET_KEY, algorithms=["HS256"])

            # Check if the token has the required claims
            assignment_info = decoded_data.get('assignment')
            if not assignment_info:
                raise exceptions.AuthenticationFailed('Invalid token')

            # Add more checks here if needed (e.g., allowed_forms_ids, administration_ids)

            # You can return a custom user or any identifier here
            return (assignment_info, None)  # Authentication successful

        except jwt.ExpiredSignatureError:
            raise exceptions.AuthenticationFailed('Token expired')
        except jwt.DecodeError:
            raise exceptions.AuthenticationFailed('Token is invalid')
        except jwt.InvalidTokenError:
            raise exceptions.AuthenticationFailed('Invalid token')
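
To activate this class, it can be registered per view (a sketch is shown below; the view name is hypothetical) or globally via the REST_FRAMEWORK["DEFAULT_AUTHENTICATION_CLASSES"] setting:

from rest_framework.response import Response
from rest_framework.views import APIView


class MobileDeviceView(APIView):
    # Only the mobile token authentication is consulted for this endpoint.
    authentication_classes = [MobileAppAuthentication]

    def get(self, request):
        # request.user holds the assignment claim returned by authenticate().
        return Response({"assignment": request.user})
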
6. Token Implementation Considerations

Endpoint Modifications

1. Get List of Assigned Forms

Unlike nmis-mobile, the RTMIS Mobile application will not offer the option to add users manually from the device (this was removed from the latest nmis-mobile). Consequently, when logging in, the response will now include information about the assignmentName. The remaining data will adhere to the existing structure of the previous Authentication API.

{"code": "<assignment_code_provided_by_admin>"}
{
  "name": "Kelewo Community",
  "syncToken": "Bearer eyjtoken",
  "formsUrl": [
    {
      "id": 519630048,
      "url": "/forms/519630048",
      "version": "1.0.0"
    },
    {
      "id": 533560002,
      "url": "/forms/533560002",
      "version": "1.0.0"
    },
    {
      "id": 563350033,
      "url": "/forms/563350033",
      "version": "1.0.0"
    },
    {
      "id": 567490004,
      "url": "/forms/567490004",
      "version": "1.0.0"
    },
    {
      "id": 603050002,
      "url": "/forms/603050002",
      "version": "1.0.0"
    }
  ],
  "certifications": []
}
2. Get Individual Form

The Individual Form will be the same as the previous response endpoint, with the only change being in the schema of the cascade-type question as defined in the Mobile Cascade Modification section. In the previous cascade-type question, the parent_id was an integer, acting as the initial level cascade filter, so the first level of the cascade showed the children of the parent_id. Now, we support multiple parent_ids, so the first level of the cascade represents the parent_ids themselves.

Initial Result:

"source": {
  "file": "cascade-296940912-v2.sqlite",
  "parent_id": 273
},

Final Result:

"source": {
  "file": "cascade-296940912-v2.sqlite",
  "parent_id": [273,234]
},

Form Updates

New Question Type

Data-point Question

This new question type is similar to an option-type question, but instead of custom options created by the user, the options will be populated from the "data-point-name" field in the data table (refer to: https://wiki.cloud.akvo.org/books/rtmis/page/low-level-design#bkmrk-database-overviews).
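
A sketch of how the back-end could build those options from the data table, assuming a Data model mapped to the data table shown earlier (the import path and helper name are assumptions):

from api.v1.v1_data.models import Data  # assumed module path


def get_datapoint_options(form_id, administration_ids):
    # Each existing data-point name becomes one selectable option.
    queryset = Data.objects.filter(
        form_id=form_id,
        administration_id__in=administration_ids,
    ).values("id", "name")
    return [{"value": row["id"], "label": row["name"]} for row in queryset]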

Requirements:

Parameters:

New Question Parameter

Display Only

The "Display Only" parameter is a helper that can be used to display a question for which the answer should not be sent to the server. The "Display Only" parameter is used to assist users in running data calculations, dependency population, or auto-answering for other questions.

Example use case:

Requirements:

Parameters:

Database Migration: Question

String Function

The latest version of the questionnaire introduces a new type of question, released in akvo-react-form v2.2.6, known as autofield. This question type necessitates a new parameter, with fn as the object name. To accommodate this, modifications to the database are required to store this new parameter effectively. 

Example use case:

{
  "id": 1701810579091,
  "name": "Outcome result - Functional toilet with privacy",
  "order": 4,
  "type": "autofield",
  "required": false,
  "meta": false,
  "fn": {
    "fnColor": {
      "G1": "#38A15A",
      "G0": "#DB3B3B"
    },
    "fnString": "function() {(#1699422286091.includes(\"G1\") && #1699423357200.includes(\"G1\") && #1699423571454.includes(\"G1\")) ? \"G1\" : \"G0\";}",
    "multiline": false
  }
}

Requirements:

Parameters:

Database Migration: Question

Meta UUID

The "Meta UUID" parameter is a useful utility that generates a universally unique identifier (UUID) for each data point, allowing you to easily track and distinguish individual records within your dataset. This unique identifier can be used as a parent datapoint when performing data monitoring, grade claims, and certification

Example use case:

{
  "id": 1702914803732,
  "order": 4,
  "name": "hh_code",
  "label": "Household Code",
  "type": "text",
  "required": true,
  "meta": false,
  "meta_uuid": true
}

Requirements:

Parameters:

Database Migration: Question

Hidden

Example use case:

{
  "id": 1716283800,
  "order": 34,
  "name": "community_outcomes_achieved",
  "label": "Have all of the community outcomes for this grade been achieved?",
  "type": "option",
  "required": true,
  "meta": false,
  "options": [
    {
      "order": 1,
      "label": "Yes",
      "value": "yes",
      "color": "green"
    },
    {
      "order": 2,
      "label": "No",
      "value": "no",
      "color": "red"
    }
  ],
  "hidden": {
    "submission_type": ["registration", "monitoring", "certification"]
  }
}

Parameters:

Database Migration: Question

Disabled

Example use case:

{
  "id": 1699354849382,
  "order": 2,
  "name": "hh_location",
  "label": "What is the location of the household?",
  "short_label": null,
  "type": "administration",
  "required": true,
  "meta": false,
  "fn": null,
  "disabled": {
    "submission_type": ["monitoring", "verification", "certification"]
  }
}

Parameters:

Database Migration: Question

Default value

Example use case:

{
  "id": 1699354220734,
  "order": 1,
  "name": "reg_or_update",
  "label": "New household registration or Monitoring update?",
  "type": "option",
  "required": true,
  "meta": false,
  "options": [
    {
      "order": 1,
      "label": "New",
      "value": "new"
    },
    {
      "order": 2,
      "label": "Update",
      "value": "update"
    }
  ],
  "default_value": {
    "submission_type": {
      "monitoring": "update",
      "registration": "new",
    }
  },
  "dependency": null,
  "fn": null
}

Parameters:

Database Migration: Question

Pre-filled

Example use case:

{
  "id": 1699417958748,
  "order": 1,
  "name": "resp_position",
  "label": "Household respondent position in household",
  "type": "option",
  "required": true,
  "meta": false,
  "options": [
    {
      "order": 1,
      "label": "Household head",
      "value": "hh_head"
    },
    {
      "order": 2,
      "label": "Spouse of household head",
      "value": "spouse_of_hh_head"
    },
    {
      "order": 3,
      "label": "Parent of household head",
      "value": "parent_of_hh_head"
    }
  ],
  "pre": {
    "reg_or_update": {
      "new": ["hh_head"]
    }
  }
}

Parameters:

Database Migration: Question

New Option Parameter

Option Color

Additionally, new functionalities have been introduced to enhance the visual appeal of options in option and multiple_option types of questions by incorporating color. To support this feature, a new column named color needs to be migrated into the option table.

Database Migration: Option

Front-end

User Stories

1. Adding an Assignment

Step 1: Access the "Mobile Data Collectors" Section

Step 2: Initiate Adding a Mobile Data Collector

Step 3: Fill in the Assignment Details Form

Step 4: Create the Assignment

Step 5: Receive the Assignment Pass-code

2. Submitting a Pending Batch of Data

Step 1: Data Collection by Mobile Data Collector/Enumerator

Step 2: Pending Submission Review by Data Entry User

Step 3: Batch Creation for Submission

Step 4: Data Submission

Step 5: Data Approval Process (Unchanged):

Mobile

User Stories

1. User Authentication

1.a. When there's no user in the users database:

1.b. When user is available in the users database:

2. Download Data-points (for monitoring)

Mobile Database Modifications

1. Form Database

Table name: forms

Column Name | Type | Example
id | INTEGER (PRIMARY KEY) | 1
userId | INTEGER | 1
formId | INTEGER | 453743523
version | VARCHAR(255) | "1.0.1"
latest | TINYINT | 1
name | VARCHAR(255) | 'Household'
json | TEXT | See: Example JSON Form
createdAt | DATETIME | new Date().toISOString()

Changes:

2. User Database

Table name: users

Column Name | Type | Example
id | INTEGER (PRIMARY KEY) | 1
active | TINYINT | 1 (default: 0)
name | INTEGER | 1
password | TEXT | crypto
token | TEXT | token
certifications | TEXT | jsonb (administration)
lastSyncedAt | DATETIME | new Date().toISOString()

Changes:

3. Form Submission / Datapoints Database

Table name: datapoints

Column Name | Type | Example
id | INTEGER (PRIMARY KEY) | 1
form | INTEGER | 1 (represents id in forms table, NOT formId)
user | INTEGER | 1 (represents id in users table)
submitter | TEXT | 'John'
name | VARCHAR(255) | 'John - St. Maria School - 0816735922'
submitted | TINYINT | 1
duration | REAL | 45.5 (in Minutes)
createdAt | DATETIME | new Date().toISOString()
submittedAt | DATETIME | new Date().toISOString()
syncedAt | DATETIME | new Date().toISOString()
json | TEXT | '{"question_id": "value"}'
submission_type | INTEGER | 1 (represents the enum value of the submission type, i.e. registration)
uuid | VARCHAR(191) | Crypto.randomUUID()

Changes:

Mobile Cascade Modification

The updated Mobile App Development introduces a significant change in handling cascade drop-down options, particularly in how multiple parent_ids are managed. This change affects the way options are displayed and selected in the cascade type of questions. Here's a detailed explanation of the new functionality:

Updated Functionality

Previous Functionality

Example:

"source": {
  "file": "cascade-296940912-v2.sqlite",
  "parent_id": 273
},

Updated Functionality with Multiple Parent Ids

Example:

"source": {
  "file": "cascade-296940912-v2.sqlite",
  "parent_id": [273]
},
Handling Different Scenarios
  1. Single parent_id in Array:

    • If the parent_id array contains only one administration_id, the first cascade option should automatically display the children of this single parent_id.
    • Example: "parent_id": [273] would directly show the children of 273 as the cascade options.
  2. Multiple parent_ids in Array:

    • If the parent_id array contains multiple administration_ids, the first cascade level will allow selection among these parent_ids.
    • Example: "parent_id": [273, 123] means the first cascade level will have options to select either 273 or 123.
  3. Single parent_id Without Children:

    • In a scenario where the parent_id array has one administration_id and this administration does not have any children, the app should automatically select this parent_id as the value by default.
    • Example: If 273 has no children, it becomes the default selected value
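
A Python sketch of the three scenarios above (the app itself implements this in JavaScript); `children_of` stands in for the SQLite child lookup:

def initial_cascade(parent_ids, children_of):
    # Scenarios 1 and 3: a single parent_id in the array.
    if len(parent_ids) == 1:
        children = children_of(parent_ids[0])
        if children:
            # Scenario 1: show the children of the single parent directly.
            return {"options": children, "selected": None}
        # Scenario 3: no children, so auto-select the parent as the value.
        return {"options": [], "selected": parent_ids[0]}
    # Scenario 2: multiple parent_ids, so the first level lists the parents.
    return {"options": parent_ids, "selected": None}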

Data Synchronization

RTMIS - sycing - fix.png

To ensure that the mobile app is up-to-date with the latest information from the server, users can synchronize data points with a simple process. This ensures that all forms, data points, and master data are current and accurate.

Syncing Data Points

Step-by-Step Process:

  1. Initiate Sync: The mobile user can easily initiate the synchronization process by clicking the "Sync Datapoint" button on the Mobile app's Home screen.
  2. Request to Backend: When the user clicks "Sync Datapoint", the app sends a request to the backend server to retrieve three main categories of data:
    • Form Updates: Retrieves the current form assignments for the mobile user, including any updates indicated by form versions. This ensures the user is aware of any changes made to the forms they use.
    • Data-point List: Obtains the latest routine data based on the mobile user’s form assignments. This includes all relevant and recent data points necessary for the user's tasks.
    • Cascades: Retrieves the latest master data, such as administration details, organization information, and entity lists. This data is critical for aligning the app with real-world conditions and reflecting any additions, updates, or removals.
  3. Completion of Sync Process: Once the synchronization process is complete, the mobile user can access the updated data. They can then navigate to the desired form with all the latest information available.
Data-point List API

Here is an example JSON response from the data-point list API:

{
  "current": 1,
  "total": 7,
  "total_page": 1,
  "data": [
    {
      "id": 11,
      "form_id": 1699353915355,
      "name": "DATA #1",
      "administration_id": 57443,
      "url": "https://rtmis.akvotest.org/b4b00592-b949-4424-b4ba-448a0d410ecf.json",
      "last_updated": "2024-05-30T04:31:58.539349Z"
    }
  ]
}

The url field in the data array contains a URL to the JSON file that the mobile app will download as a data-point. This JSON URL is a direct link to a static file and is not generated by the back-end API, allowing it to serve high-traffic downloads.

Data-point JSON

After obtaining all the JSON URLs asynchronously, the mobile app will fetch the following JSON schema and store it in the mobile database:

{
  "id": 21,
  "datapoint_name": "Testing Data County",
  "submission_type": 1,
  "administration": 2,
  "uuid": "025b218b-d80a-454f-8d69-8eef812edc82",
  "geolocation": [
    6.2088,
    106.8456
  ],
  "answers": {
    "101": "Jane",
    "102": [
      "Male"
    ],
    "103": 31208200175,
    "104": 2,
    "105": [
      6.2088,
      106.8456
    ],
    "106": [
      "Parent",
      "Children"
    ],
    "109": 2.5
  }
}
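
The download step itself happens in the mobile app's JavaScript; purely as an illustration of fetching the JSON URLs concurrently, here is a standard-library Python sketch:

import json
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen


def fetch_datapoint(url: str) -> dict:
    # Each URL points to a static JSON file, so a plain GET is enough.
    with urlopen(url) as response:
        return json.load(response)


def fetch_all_datapoints(urls: list) -> list:
    # Download the data-point files concurrently before storing them locally.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch_datapoint, urls))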

By following this process, mobile users can maintain a high level of productivity and accuracy in their tasks, leveraging the most current data available from the server.

Monitoring Support

RTMIS - Monitoring support.png

In this version of RTMIS mobile, we introduce monitoring support for data-points. A monitoring submission is similar to a normal submission but includes previous answers. The form's shape depends on the question-level objects whose submission_type equals 2 (the enum value for monitoring). Users will only answer questions that carry a monitoring flag. When synced to the server, a monitoring submission is treated as part of the same data-point: it carries the same meta UUID as its parent data-point.

Storing the Monitoring Data-point

The following table represents the schema for storing monitoring data-points:

Column Name | Type | Example
id | INTEGER (PRIMARY KEY) | 1
formId | INTEGER | 1 (represents id in forms table, NOT formId)
name | VARCHAR(255) | 'Testing Data County'
administrationId | TINYINT | 1
uuid | VARCHAR(255) | 025b218b-d80a-454f-8d69-8eef812edc82
syncedAt | DATETIME | new Date().toISOString()
json | TEXT | '{"question_id": "value"}'

Grade Claim Support

RTMIS - grade claim FIX.png

The Grade Claim feature within the mobile app is designed to streamline the verification and certification of grades. Below is a detailed description of how this feature operates and its dependencies.

Overview

The Grade Claim feature has two submission types:

  1. Verification: Utilized through the Grade Claim form.
  2. Certification: Utilized through the Grade Certification form.

Example Form Configuration

Feature Dependencies and Behavior

How to Use the Grade Claim Feature

  1. Initiate Grade Claim:

    • Navigate to the Manage Form screen in the mobile app.
    • If verification or certification submission types are available, the respective buttons will be visible.
  2. Complete the Form:

    • Select the appropriate form (Grade Claim or Grade Certification) based on the submission type.
    • Fill out the necessary information and submit the form.
  3. No Approval Needed:

    • Once submitted, the forms do not require an approval process, allowing for immediate processing.
  4. Certification Assignments:

    • Ensure that certification assignments are managed via the dashboard by sub-county users to enable the certification feature on the mobile app.
  5. UUID Linking:

    • Ensure that each submission is linked with the parent data-point using the provided UUID to maintain data integrity.

By following this documentation, users can effectively utilize the Grade Claim feature, ensuring a smooth and efficient workflow for verifying and certifying grades.

Formatting the JSON File

We detailed the process of formatting a JSON file to create a customized questionnaire form for the RTMIS system. The RTMIS system leverages the Akvo Form Service for generating initial JSON form structures. We explored the basic structure and components of the form JSON, as documented in the Akvo React Form's README.

Additionally, we introduced specific customizations required for RTMIS, including the addition of submission_types at the form level and three new parameters at the question level: default_value, disabled, and hidden. These custom parameters are defined as objects based on the submission_type, which is specified as an enumeration.

We provided a detailed example of the JSON structure incorporating these customizations and outlined the manual steps needed to add these custom parameters after generating the initial JSON form. By following this process, users can effectively format their JSON files to meet the requirements of RTMIS customized questionnaire forms.

Overview

The RTMIS system uses a JSON file to build a questionnaire form. This JSON file can be generated using an internal library called Akvo Form Service. For more information and to access the editor, visit the following link: https://form-service.akvotest.org/forms

In general, all components and formats in the form JSON are documented in the Akvo React Form's README file. You can find the documentation here: https://github.com/akvo/akvo-react-form/blob/main/README.md

However, for this project, we have added customizations at the form-level definition and at the question level. These customizations include the addition of submission_types at the form level, and three additional parameters at the question level: `default_value`, `disabled`, and `hidden`. These parameters are defined as objects and depend on the `submission_type`. The `submission_type` itself is an enum value and is defined as a constant. You can view the definition here: https://github.com/akvo/rtmis/blob/main/backend/api/v1/v1_forms/constants.py#L37-L48.

All the customizations will be added manually after the JSON form is generated with the Akvo Form Service.

Generate the JSON Form

Create a New Form

Go to Akvo Form Service: Open your browser and navigate to https://form-service.akvotest.org/

Access the Forms Menu: Click on the "Forms" menu and Click the "New" button to create a new form.

AFS - step 1.png

Update Title and Description: Modify the default title and description as needed.

AFS - step 2.png

Edit Default Group Question: Click the gear icon on the right side to update the default group question. Once done, click the gear icon again to exit edit mode.

AFS - step 3.png

Edit questions: Click the pencil icon on the group question to edit or add questions.

AFS - step 4.png

Edit questions: For the default question, click the pencil icon and update the question type (e.g., option), fill in all necessary options, and click the pencil icon again to collapse the question.

AFS - step 5.1.png

Add Questions: To add a new question, click "Add New Question" at the bottom to insert a new question after the current one, or click "Add New Question" at the top to insert before.

AFS - step 5.2.png

Preview and Save: Go to the "Preview" tab to review and evaluate the form settings. If everything is correct, click the "Save" button to store the current version of the form.

AFS - step 6 - finish.png

Download the Form from Akvo Form Service to RTMIS

Customization Details

Form Level Customization

At the form level, we introduce a new parameter: `submission_types`. This parameter specifies the different types of submissions allowed for the form. The `submission_types` parameter is defined as an enumeration, providing a set of predefined submission types that the form can handle.

id | Submission Type | Description
1 | registration | Utilized through the Registration form
2 | monitoring | Utilized through the Monitoring form
3 | verification | Utilized through the Grade Claim form
4 | certification | Utilized through the Grade Certification form
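
Matching the table above, the back-end enum might look like the following sketch (the constants file linked earlier holds the authoritative definition):

from django.db import models


class SubmissionTypes(models.IntegerChoices):
    registration = 1, "registration"
    monitoring = 2, "monitoring"
    verification = 3, "verification"
    certification = 4, "certification"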


Question Level Customization

At the question level, we introduce three new parameters:

1. default_value: Specifies the default value for the question. This value will be pre-filled when the form is loaded.

2. disabled: Indicates whether the question should be disabled (i.e., not editable) when the form is displayed.

3. hidden: Indicates whether the question should be hidden from view when the form is displayed.

These parameters are defined as objects whose values depend on the `submission_type`: `default_value` maps each submission type to a value, while `disabled` and `hidden` list the submission types for which they apply.
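
As an illustration, a renderer could resolve these three parameters for the active submission type roughly as follows (a sketch; the helper is hypothetical):

def resolve_question_state(question: dict, submission_type: str) -> dict:
    # default_value maps each submission type to a concrete value.
    default_value = (
        (question.get("default_value") or {})
        .get("submission_type", {})
        .get(submission_type)
    )
    # disabled and hidden list the submission types they apply to.
    disabled = submission_type in (
        (question.get("disabled") or {}).get("submission_type", [])
    )
    hidden = submission_type in (
        (question.get("hidden") or {}).get("submission_type", [])
    )
    return {"default_value": default_value, "disabled": disabled, "hidden": hidden}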

Example JSON Structure

Here is an example structure of the JSON file with the added customizations:

{
  "id": 123456,
  "form": "School WASH Form",
  "description": "School WASH",
  "defaultLanguage": "en",
  "languages": ["en"],
  "version": 1,
  "type": 1,
  "translations": null,
  "submission_types": [
    "registration",
    "monitoring",
    "verification",
    "certification"
  ],
  "question_groups": [
    {
      "id": 1699354006534,
      "order": 1,
      "name": "school_location_group_question",
      "label": "School: Location",
      "repeatable": false,
      "translations": null,
      "questions": [
        {
          "id": 1699354006535,
          "order": 1,
          "name": "new_school_registration_monitoring_update",
          "label": "New school registration or monitoring update?",
          "short_label": null,
          "type": "option",
          "tooltip": {
            "text": "Entry of school data in RTMIS (first time) or update of monitoring data (existing school)"
          },
          "required": true,
          "meta": false,
          "options": [
            {
              "order": 1,
              "label": "New",
              "value": "new"
            },
            {
              "order": 2,
              "label": "Update",
              "value": "update"
            },
            {
              "order": 3,
              "label": "Verification",
              "value": "verification"
            },
            {
              "order": 4,
              "label": "Certification",
              "value": "certification"
            }
          ],
          "default_value": {
            "submission_type": {
              "monitoring": "update",
              "registration": "new",
              "verification": "verification",
              "certification": "certification"
            }
          }
        },
        {
          "id": 1699951210638,
          "order": 2,
          "name": "school_location",
          "label": "What is the location of the school?",
          "short_label": null,
          "type": "administration",
          "tooltip": {
            "text": "This question contains a list of possible school locations, starting with the government area or district, down to the community."
          },
          "required": true,
          "meta": true,
          "disabled": {
            "submission_type": ["monitoring", "verification", "certification"]
          }
        },
        {
          "id": 1716283778,
          "order": 33,
          "name": "schools_achieved_required_outcomes",
          "label": "Have 100% of school achieved the required outcomes for this grade?",
          "short_label": null,
          "type": "option",
          "required": true,
          "meta": false,
          "options": [
            {
              "order": 1,
              "label": "Yes",
              "value": "yes",
              "color": "green"
            },
            {
              "order": 2,
              "label": "No",
              "value": "no",
              "color": "red"
            }
          ],
          "hidden": {
            "submission_type": ["registration", "monitoring", "certification"]
          }
        }
      ]
    }
  ]
}

By following these steps, you can successfully format the JSON file to work with RTMIS as a customized questionnaire form.

RTMIS Self-Host Installation Guide

Installation Guide

The steps below describe the self-hosted or on-premises installation process. Please follow the Developer Guide to set up the development environment.

Infrastructure Diagram

System Requirements

Application Server

Database Server

Prerequisite

Preparation

Note: The following guide is an example installation on Ubuntu and Debian based systems. The dependencies below need to be installed on both the Application Server and the Database Server.

Install Docker Engine

  1. Install Docker engine:

    sudo curl -L https://get.docker.com | sudo sh
    
  2. Manage Docker for a non-root user.

    sudo usermod -aG docker $USER
    exit
    
  3. The above exit command will close your terminal session. Please log back in as the same user before continuing with the next steps.

Install Git Version Control

RTMIS uses git as version control, so it is better to install git to make retrieving updates easier than downloading the repository zip.

sudo apt install git

Install Database Server

Execute the commands below on the server allocated for the database server.

Clone the Repository

cd ~
mkdir src
cd src
git clone https://github.com/unicefkenya/rtmis.git .

Environment Variable Setup

Install a text editor to be able to edit the .env file

sudo apt install nano

or

sudo apt install vim

Go to the repository directory, then edit the environment file:

cd deploy
cp db.env.template db.env
vim db.env

Example Environment:

POSTGRES_PASSWORD=<<your postgres user's password>>

# Ensure the values below match those in the app.env file in the application.
DB_USER=<<your rtmis db user>>
DB_PASSWORD=<<your postgresql password>>
DB_SCHEMA=<<your rtmis schema name>>

Run the Database Server

docker compose -f docker-compose.db.yml up -d 

Install Application Server

Execute the commands below on the server allocated for the application server.

Clone the Repository

cd ~
mkdir src
cd src
git clone https://github.com/unicefkenya/rtmis.git .

Environment Variable Setup

Install a text editor to be able to edit the .env file

sudo apt install nano

or

sudo apt install vim

Go to the repository directory, then edit the environment file:

cd deploy
cp app.env.template app.env
vim app.env

Example environment variables:

DB_HOST=<<your postgresql ip>>
DB_PASSWORD=<<your postgresql password>>
DB_SCHEMA=<<your rtmis schema name>>
DB_USER=<<your rtmis db user>>
POSTGRES_PASSWORD=<<your postgres user's password>>
DEBUG="False"
DJANGO_SECRET=<<your Django secret key>>
MAILJET_APIKEY=<<your mailjet api key from mailjet portal>>
MAILJET_SECRET=<<your mailjet api secret from mailjet portal>>
WEBDOMAIN=<<your exposed domain url, example : https://rtmis.akvo.org>>
APK_UPLOAD_SECRET=<<your apk upload secret>>
STORAGE_PATH="./storage"
SENTRY_DSN="<<your sentry DSN for BACKEND>>"
TRAEFIK_CERTIFICATESRESOLVERS_MYRESOLVER_ACME_EMAIL=<<administrator email for Letsencrypt registration>>

Build the Documentation

CI_COMMIT=initial docker compose -f docker-compose.documentation-build.yml up

Build the Frontend

CI_COMMIT=initial docker compose -f docker-compose.frontend-build.yml up

Run the Application

docker compose -f docker-compose.app.yml up -d --build

Data Seeding for Initial Data

Once the app is started, we need to populate the database with the initial data set. The required initial data sets are:

  1. Seed administration
  2. Seed super admin
  3. Seed form
  4. Seed organization

Run the seeder script below to populate all of them:

docker compose -f docker-compose.app.yml exec backend ./seeder.prod.sh

Cheatsheets

Manually Update the Application

Execute the command below on the application server to update the application with the latest codes and re-deploy to the application server:

$ cd deploy/
$ ./manual_update.sh

Restart the Application

Execute the command below on the application server to restart the application container:

$ cd deploy/
$ ./restart_app.sh

Clear Nginx Cache

Execute the command below on the application server:

$ docker compose -f docker-compose.app.yml exec -u root frontend sh -c "rm -rf /var/tmp/cache/*"

Remove Form

Execute the command below on the application server.

Login to container:

$ docker compose -f docker-compose.app.yml exec backend sh

After logging in, execute the commands below:

$ python manage.py shell
> from api.v1.v1_forms.models import Forms
> f = Forms.objects.filter(name="Short HH").first()
> f.delete()

Exit the container, then execute the command below on the application server:

$ docker compose -f docker-compose.app.yml exec -u root frontend sh -c "rm -rf /var/tmp/cache/*"

Execute the Cronjob Manually

Execute the command below on the application server to trigger the cronjob manually.

$ docker compose -f docker-compose.app.yml exec backend-cron ./job.sh

Generate Django Secret

$ python3 -c 'import secrets; print(secrets.token_hex(60))'

     

Sentry - Register Account and Setup Project




Expo.dev - Register Account and Setup Project





    1. Display Name: Enter a human-readable name for your project (e.g., "Mobile App"). This is how the project will be displayed within the Expo platform.

    2. Slug: Provide a unique, URL-friendly name for your project (e.g., "mobile-app"). This slug will be used in the project's URL, so it should be unique across your account.


After creating your access token, it's vital to secure it properly. In the Access Tokens section, you will see your newly generated token listed, along with a warning indicating that you should copy and store it in a safe place.

image.png