An online tool based on the Internet of Things and intelligent blockchain technology for data privacy and security in rural and agricultural development
Multi-tiered BCT for Agri-IoT
Figure 1 shows a multi-tiered system that uses a permissioned BCT to secure and optimize Agri-IoT data interactions. This system has edge, fog, and cloud tiers. 'Data Handlers' are the functional components of each tier:

Figure 1. Proposed Agri-IoT data interaction workflow.
- LADH: Situated at the edge tier, the LADH communicates directly with IoT devices, which monitor diverse agricultural parameters. The LADH's primary role is to collect and forward this raw data for further processing. The devices it serves include:
  a) Soil Sensors: Measure soil moisture, pH, temperature, and nutrients for informed farming decisions.
  b) Crop Health Monitors: Detect crop diseases and pests using imaging and sensing technologies.
  c) Environmental Sensors: Monitor micro-climatic conditions such as humidity and rainfall.
  d) SF Devices: IoT-enabled tractors and drones for efficient, data-driven farming.
  e) Livestock Monitors: Track the health and movement of livestock.
- PAFDH: As an intermediary between the edge and fog tiers, the PAFDH connects multiple LADHs at edge-tier nodes, relaying information from the grassroots-level IoT devices to the more advanced data-handling tiers. Its core function is data transmission, but it also performs preliminary data processing and filtering.
- CAFDH: The fog tier's gateway to the cloud, the CAFDH aggregates data from the core fog PAFDH nodes and performs robust processing. The CAFDH standardizes incoming data, while the ECC algorithm encrypts and decrypts it, ensuring data confidentiality and security.
- CADH: The cloud-based CADH stores and manages BC data. Once data has been processed into its required format, this layer consolidates, archives, and potentially analyses it for intrusion detection (IDS), using the COA for feature selection (FS) and the proposed QNN + BO for classification.
Each DH has a block header, a MAC policy header guiding transactional directives, and transaction logs. Confidentiality, Integrity, and Availability (CIA) security is built into this architecture. The Agri-IoT-specific design prioritizes scalability, decentralization, and strong defences against vulnerabilities, especially single points of failure. Because rural IoT devices have computational constraints that make complex cryptographic processing impractical on-device, each DH includes 'miners'. These units manage communication, device integration, and transactions.
Additionally, each DH features Intelligent Data Assistants (IDA) skilled in signature verification, authentication, authorization, and cryptographic measures. However, to maintain operational efficiency at the grassroots level and to accommodate the inherent computational limitations of LADH devices, the LADH deliberately omits the encryption and decryption components; these cryptographic measures are relegated to the higher tiers. Regarding consensus-building, the model avoids a traditional public BC, which could expose sensitive agricultural data or be prone to security attacks. Instead, the system uses a permissioned BCT, creating an environment where only verified entities can participate in consensus derivation and mining, ensuring privacy and security for valuable agricultural data. In the Agri-IoT, communications between IoT devices, central fog nodes, and the overarching cloud infrastructure are termed transactions.
Transactions differ in nature according to their distinct objectives and can be categorized as follows:
- Access Transactions: 'Data Handlers' initiate these primarily for data retrieval, granting read-only permissions to preserve data integrity while allowing viewing rights.
- Update Transactions: Devices use these transactions to modify existing data, granting read and write permissions to alter stored data.
- Add Transactions: 'Data Handlers' use these transactions to integrate new IoT devices or nodes into the system, providing read-and-write capabilities for seamless onboarding.
- Remove Transactions: 'Data Handlers' trigger these transactions to delist or deactivate IoT devices, ensuring optimal resource allocation and system efficiency.
- Monitor Transactions: 'Data Handlers' use these transactions to observe IoT devices' operational status and data flow, providing real-time visibility and enabling proactive responses.
In this structure, each transaction type is pivotal in maintaining the security and efficiency of the Agri-IoT, ensuring the model remains adaptive to the dynamic demands of modern agriculture.
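To make the taxonomy concrete, the following minimal Python sketch maps each transaction type to the read/write permissions described above; the class names, the permission sets assumed for Remove and Monitor transactions, and the helper function are illustrative assumptions rather than part of the proposed system.

```python
from enum import Enum, auto

class TxType(Enum):
    ACCESS = auto()   # read-only data retrieval
    UPDATE = auto()   # modify existing data
    ADD = auto()      # onboard a new IoT device or node
    REMOVE = auto()   # delist or deactivate a device
    MONITOR = auto()  # observe operational status and data flow

# Permission sets implied by the transaction taxonomy above
# (Remove and Monitor permissions are assumptions for illustration).
PERMISSIONS = {
    TxType.ACCESS:  {"read"},
    TxType.UPDATE:  {"read", "write"},
    TxType.ADD:     {"read", "write"},
    TxType.REMOVE:  {"read", "write"},
    TxType.MONITOR: {"read"},
}

def allowed(tx_type: TxType, action: str) -> bool:
    """Return True if the requested action is permitted for this transaction type."""
    return action in PERMISSIONS[tx_type]

assert allowed(TxType.UPDATE, "write")
assert not allowed(TxType.ACCESS, "write")
```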
MAC policy in Agri-IoT
As agricultural digitization evolves, the importance of strong security measures like Mandatory Access Control (MAC) cannot be overstated. MAC's centralized method stands out because only 'Data Handlers' can access and adjust the access control protocol, logically reducing probable vulnerabilities.
- LADH: At this grassroots level, the LADH primarily enforces MAC to authenticate IoT devices. This ensures only approved devices can interact with the system, securing data at the initial collection point.
- PAFDH: The PAFDH uses MAC to regulate data flow between IoT devices and upper-tier nodes. Validating data communications at this stage ensures that only authenticated data ascends the hierarchy.
- CAFDH: The CAFDH, equipped with advanced data processing, relies on MAC to validate and collate the incoming data for centralized analysis. This is pivotal in maintaining the integrity of aggregated data before it is relayed to the cloud.
- CADH: The CADH uses MAC to manage access to consolidated data archives, preserving sensitive agricultural insights. MAC evaluates user security authorization and entity resource classification, promoting transparency and traceability: access is determined by comparing user security clearance levels with resource classifications (a minimal sketch of this check follows the list). MAC prioritizes data confidentiality, integrity, and availability, since timely data can directly affect SF outcomes, and it restricts interactions to matching security clearances, guarding against potential attacks.
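To make the clearance-versus-classification comparison concrete, here is a minimal, hypothetical sketch of a MAC check in Python; the named clearance levels and the simple "no read up" rule are assumptions for illustration, not the paper's exact policy.

```python
# Hypothetical ordered security levels (low -> high); the labels are illustrative.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def mac_allows_read(user_clearance: str, resource_classification: str) -> bool:
    """Grant read access only when the user's clearance dominates the
    resource's classification (a simple 'no read up' rule)."""
    return LEVELS[user_clearance] >= LEVELS[resource_classification]

# Example: a fog-tier handler with 'confidential' clearance reading 'internal' data
assert mac_allows_read("confidential", "internal")
assert not mac_allows_read("internal", "restricted")
```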
Data handlers in Agri-IoT
The BC model is structured around four tiers of 'Data Handlers': LADH, PAFDH, CAFDH, and CADH. 'Data Handlers' manage device access control policies, facilitating communication and data management. Each block in the BC contains two headers, one representing the block itself and one carrying the access policy. The MAC policy is embedded in every communication, ensuring consistent security measures. A designated 'miner' manages system tasks, including authentication, authorization, and auditing. When a new block is added, 'Data Handlers' link it to the preceding block, transferring the policy from the earlier block header to the new one. ECC is employed for secure data transfer from the CAFDH to the CADH, ensuring the confidentiality and tamper-proofing of data transmitted to cloud storage.
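The following Python sketch illustrates, under stated assumptions, how a block with the dual headers described above could be chained to its predecessor while the MAC policy is carried forward; the field names and the use of SHA-256 are illustrative choices, not the paper's exact implementation.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class Block:
    """Minimal sketch of a DH block with the dual headers described above."""
    block_header: dict        # index, timestamp, hash of the preceding block
    policy_header: dict       # MAC policy carried forward from the previous block
    transactions: list = field(default_factory=list)

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> Block:
    prev = chain[-1]
    new = Block(
        block_header={"index": len(chain), "time": time.time(),
                      "prev_hash": prev.digest()},
        policy_header=dict(prev.policy_header),   # policy transferred to the new block
        transactions=transactions,
    )
    chain.append(new)
    return new

genesis = Block({"index": 0, "time": time.time(), "prev_hash": "0" * 64},
                {"mac_policy": "baseline"})
chain = [genesis]
append_block(chain, [{"type": "ACCESS", "device": "soil-sensor-01"}])
```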
Intelligent data assistant (IDA)
The framework utilizes an IDA, a software-based component recognized for its versatility and efficiency. It operates transparently, allowing agile communications with other devices and providing a unified user experience. IDA benefits IoT devices with computational constraints, reducing operational costs and resource consumption.
- Key Generation Assistant (KGA): The KGA is a tool within the CADH that generates ECC public-private key (PuK-PrK) pairs for nodes and users. The PuK is shared securely, while the PrK is stored securely within the CADH to preserve confidentiality and prevent unauthorized access.
- Authentication Assistant (AuthA): The AuthA, situated in the LADH, PAFDH, CAFDH, and CADH layers, acts as a gateway guardian, scrutinizing every transactional request and ensuring that only legitimate requests meeting the system's security requirements proceed, which is particularly important in digital agricultural settings with many devices and nodes.
- Authorization Assistant (AuthzA): The AuthzA combines authentication and authorization to secure the data system. It uses MAC to analyze requests and checks for proper authorization, significantly reducing the risk of unauthorized data access or modification.
- Crypto Assistant (CryptA): Located within the CAFDH, the CryptA is vital to the secure functioning of the Agri-IoT. Equipped with the EC3, it manages encryption and decryption. When data is destined for the BC, the CryptA invokes its encryption module, which uses the EC3 to process the raw data with the recipient's PuK, producing securely encrypted data. This cryptographic transformation ensures data confidentiality. When authorized entities request data, the decryption module retrieves the encrypted data and uses the EC3 with the requester's PrK to restore it to its original form. This ensures that only authorized individuals can access and decrypt sensitive agricultural data, maintaining the integrity and privacy of the entire Agri-IoT.
- Signature Verification Assistant (SigVerA): The SigVerA is a key component within the CADH, ensuring transaction integrity. It takes the sender's ECC-PuK, the transmitted data, and the associated signature as inputs. Using the EC3, the SigVerA verifies whether the signature matches the transmitted data, confirming transaction integrity and authenticating the sender's identity (a minimal signing/verification sketch follows Fig. 2). The communication steps between the assistants are illustrated in a sequence diagram (Fig. 2).

Figure 2. Communication sequence diagram among the assistants.
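As a minimal illustration of the SigVerA flow referenced above, the sketch below uses ECDSA over P-256 from the third-party cryptography package as a stand-in for the paper's ECC/EC3 signing; the payload, key handling, and helper function are simplified assumptions.

```python
# Requires the third-party 'cryptography' package; ECDSA over P-256 is used here
# only as a stand-in for the paper's ECC-based signing.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# KGA-style key generation (PrK kept by the sender, PuK shared with verifiers)
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

payload = b'{"device": "soil-sensor-01", "moisture": 0.31}'
signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

def sigver(pub, data: bytes, sig: bytes) -> bool:
    """SigVerA-style check: does the signature match the transmitted data?"""
    try:
        pub.verify(sig, data, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

assert sigver(public_key, payload, signature)
assert not sigver(public_key, payload + b"tampered", signature)
```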
EC3 algorithm
ECC, a cornerstone of modern cryptography, faces growing pressure from advancing computational capabilities and cryptanalysis methods, which motivates augmenting it with chaotic maps. These deterministic yet unpredictable systems introduce additional layers of unpredictability, potentially thwarting traditional attacks and increasing resistance to brute-force attacks. This hybrid encryption challenges conventional cryptanalysis, although cryptographic authorities must still assess its strength and applicability.
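The paper does not specify the EC3 construction in code. The sketch below shows one plausible reading, in which a logistic chaotic map, seeded from an assumed ECC/ECDH-derived shared secret, generates a keystream that is XORed with the plaintext; the map parameter r = 3.99, the seed derivation, and the placeholder secret are illustrative assumptions only.

```python
import hashlib

def chaotic_keystream(shared_secret: bytes, n_bytes: int, r: float = 3.99) -> bytes:
    """Generate n_bytes of keystream from a logistic map x -> r*x*(1-x),
    seeded by hashing an ECC/ECDH-derived shared secret (assumed input)."""
    seed = int.from_bytes(hashlib.sha256(shared_secret).digest(), "big")
    x = (seed % 10**8) / 10**8 or 0.5        # initial condition in (0, 1)
    stream = bytearray()
    for _ in range(n_bytes):
        x = r * x * (1.0 - x)                # chaotic iteration
        stream.append(int(x * 256) % 256)
    return bytes(stream)

def ec3_encrypt(plaintext: bytes, shared_secret: bytes) -> bytes:
    ks = chaotic_keystream(shared_secret, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

ec3_decrypt = ec3_encrypt  # an XOR stream cipher is its own inverse

secret = b"ecdh-derived-shared-secret"       # placeholder for the real ECC key-agreement step
ct = ec3_encrypt(b"soil moisture: 0.31", secret)
assert ec3_decrypt(ct, secret) == b"soil moisture: 0.31"
```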

The integration of BC in this multi-tiered model is pivotal for several reasons directly linked to the unique features of the proposed QNN + BO:
1. Decentralized and Secure Data Management: The BCT reinforces the data-handling structure of the proposed Data Handlers across the LADH, PAFDH, CAFDH, and CADH layers. This decentralization yields more secure and resilient data management, as each BC block contains dual headers for data and policy, enhancing data integrity and security.
2. Enhanced Data Security with ECC-Chaotic Map: Integrating ECC with chaotic maps within the proposed Crypto Assistant (CryptA) component provides a strengthened encryption mechanism. This combination preserves data confidentiality as data transitions across the tiers, particularly in the critical phase from CAFDH to CADH.
3. Robust Access Control via MAC Policy: The MAC policy, integrated into every transaction, reinforces the proposed QNN + BO's security. By controlling access permissions across all transactions, the system effectively safeguards against unauthorized access and data attacks.
4. Effective Key Management with IDA: In the proposed model, IDAs such as the Authentication Assistant (AuthA) leverage BCT for secure key management and authentication processes. This results in a closed network where only authenticated devices and users can interact.
5. Secure Transactional Integrity: The SigVerA within the CADH uses the ECC-PuK to verify transactional integrity. This ensures that data and transactions recorded on the BCT are authentic and unaltered, maintaining the trustworthiness of the entire Agri-IoT.
6. Operational Efficiency and Cost Reduction: By automating key processes and reducing reliance on centralized systems, BCT in the proposed QNN + BO contributes to operational efficiency and cost reduction, particularly in managing the large-scale data generated by IoT devices in agriculture.
COA for feature selection
The metaheuristic algorithm models the North American coyote (Canis latrans), drawing on the species' pack structure and environmental acclimatization. The COA maintains equilibrium between the exploration and exploitation phases, with each Coyote's Social State (CSS) acting as a binary vector. The algorithm initializes with a randomly generated coyote population, ensuring every CSS has at least one '1' so that no feature subset remains empty. This initialization is given by Eq. (1).
$$SOC_{C,j}^{P,t}=l_{j}+r_{j}\times\left(u_{j}-l_{j}\right)$$
(1)
where \(l_{j}\) and \(u_{j}\) are the lower and upper bounds of the \(j\)th decision variable and \(r_{j}\) is a random number drawn uniformly from \([0,1]\).
The fitness of each feature subset is determined by the performance of a Machine Learning (ML) model trained on the selected features, as in Eq. (2).
$$\mathrm{fit}_{C}^{P,t}=\mathrm{Accuracy}\left(M_{C}^{P,t}\right)$$
(2)
where,
- \(M\) → the model,
- \(C\) → the coyote's index,
- \(P\) → the pack's index,
- \(t\) → the iteration.
The algorithm introduces dynamism by allowing coyotes to occasionally change packs, with a probability given by Eq. (3).
$$\:{P}_{e}=0.005\times\:{N}_{C}^{2}$$
(3)
According to Eq. (4), the alpha (Alp) coyote is the best performer, balancing model performance against the number of selected features.
$$\mathrm{Alp}^{P,t}=\underset{C=1,2,\ldots,N}{\mathrm{ArgMin}}\left\{\mathrm{Fit}\left(SOC_{C}^{P,t}\right)+\lambda\times HW_{C}^{P,t}\right\}$$
(4)
With each iteration, every coyote's age, initialized at '0', increases according to Eq. (5).
$$\mathrm{Age}_{C}^{P,t+1}=\mathrm{Age}_{C}^{P,t}+1$$
(5)
In the COA, a coyote that reaches a certain age dies and is replaced by a newly 'born' coyote carrying a random feature set; this introduces fresh standpoints while a collective cultural memory is maintained across the pack, Eq. (6).
$$Culture_{j}^{{(P,t)}} = AvgTopK$$
(6)
Informed by the alpha and the collective memory, each coyote updates its feature set as Eq. (7).
$$SOC_{C}^{P,t,\mathrm{new}}=SOC_{C}^{P,t,\mathrm{old}}+r_{1}\times\delta_{1}+r_{2}\times\delta_{2}$$
(7)
where \(\delta_{1}\) and \(\delta_{2}\) capture the influence of the alpha coyote and of the cultural memory, respectively, and \(r_{1}\), \(r_{2}\) are random weights.
The algorithm iterates until the improvement in the fitness function becomes negligible; the alpha coyote's feature subset is then taken as the optimal set, ensuring high performance with minimal features.
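A simplified Python sketch of the COA-based feature selection loop is given below; it frames Eq. (4) as maximizing accuracy minus a feature-count penalty, uses a plain logistic-regression fitness, and replaces the full birth/death and pack-exchange mechanics with a greedy update, so it is an illustrative approximation rather than the paper's exact algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y, lam=0.01):
    """Accuracy of a classifier on the selected subset minus a feature-count
    penalty (mirrors Eqs. (2) and (4) up to sign; lam is an assumed weight)."""
    if mask.sum() == 0:
        return -np.inf
    acc = cross_val_score(LogisticRegression(max_iter=500),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - lam * mask.sum()

def coa_feature_selection(X, y, n_coyotes=10, n_iter=30):
    n_feat = X.shape[1]
    pop = (rng.random((n_coyotes, n_feat)) > 0.5).astype(int)
    pop[pop.sum(axis=1) == 0, 0] = 1                     # every CSS keeps at least one '1'
    fits = np.array([fitness(c, X, y) for c in pop])
    for _ in range(n_iter):
        alpha = pop[fits.argmax()]                       # best coyote so far
        culture = (pop.mean(axis=0) > 0.5).astype(int)   # simplified cultural memory
        for i in range(n_coyotes):
            r1, r2 = rng.random(2)
            # Eq. (7), binarized: move towards the alpha and the culture
            delta = r1 * (alpha - pop[i]) + r2 * (culture - pop[i])
            cand = ((pop[i] + delta) > 0.5).astype(int)
            if cand.sum() == 0:
                cand[rng.integers(n_feat)] = 1
            f = fitness(cand, X, y)
            if f > fits[i]:                              # greedy replacement ("birth/death")
                pop[i], fits[i] = cand, f
    return pop[fits.argmax()]                            # alpha coyote's feature subset
```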

Implementation of QNN
The QNN adopted here builds on earlier work on quantum neural networks, which this paper extends by integrating it with BO. That formulation introduces the quantum perceptron as the primary unit of the QNN. The perceptron acts as a unitary operator with \(m\) input qubits and \(n\) output qubits and is defined by \({\left(2^{(m+n)}\right)}^{2}-1\) parameters. Inputs are initialized in a mixed state, denoted \(\rho^{in}\), and outputs begin in a standard state \({\left| {0 \ldots 0} \right\rangle ^{out}}\). Quantum perceptrons take \(m\) inputs to produce a single output, so they operate on \(m+1\) qubits. The QNN is visualized as multiple layers of these perceptrons, with \(L\) hidden layers. The input state \(\rho^{in}\) travels through these layers to yield an output state \(\rho^{out}\), Eq. (8).
$${\rho ^{out~}}=T{r_{in,~hid~}}\left( {U\left( {{\rho ^{in~}} \otimes {{\left| {0 \ldots 0} \right\rangle }_{hid,~out~}}\left\langle {0 \ldots 0} \right|} \right){U^\dag }} \right)$$
(8)
The QNN circuit, consisting of the individual layer unitaries, is represented by U, and the order in which these perceptrons are applied matters because they do not commute. The QNN design supports universal quantum computation, even with perceptrons restricted to two inputs and one output. A QNN with 4-level qudit perceptrons maintains this capability. Non-commuting qubit-based perceptrons offer practical benefits, allowing any quantum channel on the input qudits to be realized and maximizing perceptron versatility. A key feature of the QNN is that its output is structured as a composition of completely positive maps between layers, termed \(E^{l}\), culminating in the output state \(\rho^{out}\), which is computed as Eq. (9).
$$\:{\rho\:}^{\text{out\:}}={E}^{\text{out\:}}\left({E}^{L}\left(\dots\:{E}^{2}\left({E}^{1}\left({\rho\:}^{\text{in\:}}\right)\right)\dots\:\right)\right)$$
(9)
where, \(\:{E}^{l}\left({X}^{(l-1)}\right)\) is given by Eq. (10)
$$E^{l}\left(X^{(l-1)}\right)=\mathrm{Tr}_{l-1}\left(\prod\limits_{j=m_{l}}^{1}U_{j}^{l}\left(X^{(l-1)}\otimes\left|0\ldots0\right\rangle_{l}\left\langle0\ldots0\right|\right)\prod\limits_{j=1}^{m_{l}}U_{j}^{l\dagger}\right)$$
(10)
Here, \(U_{j}^{l}\) denotes the \(j\)th perceptron acting between layers \(l-1\) and \(l\), and \(m_{l}\) counts these perceptrons. The construction shows how data progresses from input to output, producing a quantum Feed-Forward Neural Network (FFNN) and laying the basis for pairing the QNN with BO, with a specific quantum perceptron selected for the QNN. In this variant of the model, the unitary \(U\) is given by Eq. (11).
$$U=\sum\nolimits_{\alpha } {\left| \alpha \right\rangle } \left\langle \alpha \right| \otimes U(\alpha )$$
(11)
where \(\left|\alpha\right\rangle\) ranges over the computational basis states of the input register and \(U(\alpha)\) is the unitary applied to the output qubits conditioned on \(\alpha\).
Using this in the output state equation, the resulting output state operates through a measure-and-prepare channel, as Eq. (12).
$$\rho^{out}=\sum\limits_{\alpha}\left\langle\alpha\right|\rho^{in}\left|\alpha\right\rangle\,U\left(\alpha\right)\left|0\right\rangle\left\langle0\right|U\left(\alpha\right)^{\dagger}$$
(12)
Such a channel does not have a positive quantum channel capacity, restricting its general quantum computation capabilities.
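The following schematic numpy sketch mirrors the layer-to-layer maps of Eqs. (8)-(10): each layer channel attaches fresh |0...0⟩ qubits, applies the layer unitary, and traces out the previous layer. Random unitaries stand in for trained perceptrons, so this only illustrates the channel composition, not the paper's trained QNN.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(d):
    """Random unitary via QR decomposition (stand-in for a trained layer unitary)."""
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

def partial_trace_first(rho, d_first, d_second):
    """Trace out the first tensor factor of a (d_first*d_second)-dim density matrix."""
    rho = rho.reshape(d_first, d_second, d_first, d_second)
    return np.einsum("ijik->jk", rho)

def layer_channel(rho_prev, U, d_prev, d_layer):
    """Schematic E^l of Eq. (10): attach |0..0><0..0| on the new layer,
    apply the layer unitary U, then trace out the previous layer."""
    zero = np.zeros((d_layer, d_layer), dtype=complex)
    zero[0, 0] = 1.0
    state = np.kron(rho_prev, zero)           # ordering: previous layer ⊗ new layer
    state = U @ state @ U.conj().T
    return partial_trace_first(state, d_prev, d_layer)

# Two single-qubit hidden layers acting on a single-qubit input state (Eq. (9))
rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # |0><0| input
for _ in range(2):
    U = random_unitary(4)                     # unitary on (previous ⊗ layer), dim 2*2
    rho = layer_channel(rho, U, d_prev=2, d_layer=2)
print(np.trace(rho).real)                     # stays 1: the maps are trace-preserving
```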
Training QNN
The QNN’s next phase involves specifying the learning task and translating classical learning scenarios into the quantum domain. Two methods are used: replacing classical samples with distinct quantum states or equating the distribution directly to a specific quantum state. In the selected approach, \(\:{\prime\:}\text{N}{\prime\:}\) samples can be viewed as \(\:{\prime\:}\text{N}{\prime\:}\) identical quantum states, emphasizing continuous access to training data, including pairs \(\left( {\left| {\phi _{x}^{{in}}} \right\rangle ,\left| {\phi _{x}^{{out~}}} \right\rangle } \right)\), where the quantum states might be unknown.
Furthermore, the number of copies required for each training iteration multiplies with the number of neurons and scales linearly with the network’s total parameters, represented as \(\:{n}_{\text{proj\:}}\times\:{n}_{\text{params\:}}\). Here, \(\:{n}_{\text{proj\:}}\) stems from recurrent measurements to counteract prediction noise and \(\:{n}_{\text{params\:}}\) is the network’s total parameter count, determined by \(\:{\sum\:}_{l=1}^{L+1}\:\left({4}^{\left({m}_{l-1}+1\right)}-1\right)\times\:{m}_{l}\). In this, \(\:{m}_{l}\) is the number of perceptrons between layers \(\:\{l-1\), \(\:l\}\) with the ‘-1’ accounting for the unitaries’ complete phase irrelevance. Larger models may only be feasible with sparsely connected networks in the short term unless simple training data is produced. Implementing output states using thermal equilibrium via environmental interaction may remain feasible. The subsequent discussions will focus on cases where \(\left| {\phi _{x}^{{out~}}} \right\rangle =V\left| {\phi _{x}^{{in~}}} \right\rangle\), with \(\:{\prime\:}\text{V}{\prime\:}\) as an unknown unitary process. This scenario arises when a device’s operation is not fully trusted but can reliably initiate and manipulate numerous initial states.
To measure the QNN's learning efficacy, a cost function quantifies the closeness between the network's output state \(\rho_{x}^{out}\) and the desired state \(\left| {\phi _{x}^{{out}}} \right\rangle\). Fidelity is used as this metric, representing the proximity of pure quantum states. The cost function \(C\) is expressed as Eq. (13).
$$C=\frac{1}{N}\sum\nolimits_{{x=1}}^{N} {\left\langle {\phi _{x}^{{out~}}} \right|} \rho _{x}^{{out~}}\left| {\phi _{x}^{{out~}}} \right\rangle$$
(13)
Analogous to the risk function of classical deep networks, this cost function can be evaluated efficiently; for non-pure states in the training data, the mixed-state fidelity of Eq. (14)
$$\:F(\rho\:,\sigma\:)=\:{\left[tr\sqrt{\left({\rho\:}^{1/2}\sigma\:{\rho\:}^{1/2}\right)}\right]}^{2}$$
(14)
is applied, which is also appropriate when training the network to represent a quantum channel.
The value of the cost function \(C\) ranges from 0 (worst performance) to 1 (best performance). Training the QNN entails optimizing \(C\).
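For pure target states, the cost of Eq. (13) reduces to an average overlap, as in the small numpy sketch below; the toy states are purely illustrative.

```python
import numpy as np

def cost(target_states, output_states):
    """Eq. (13): average fidelity <phi_out| rho_out |phi_out> over the training set,
    assuming pure target states given as normalized column vectors."""
    total = 0.0
    for phi, rho in zip(target_states, output_states):
        total += np.real(phi.conj().T @ rho @ phi).item()
    return total / len(target_states)

# Toy single-qubit check: a perfect output contributes 1, a maximally mixed one 0.5
phi = np.array([[1.0], [0.0]], dtype=complex)      # target |0>
rho_perfect = phi @ phi.conj().T
rho_mixed = 0.5 * np.eye(2, dtype=complex)
print(cost([phi, phi], [rho_perfect, rho_mixed]))  # -> 0.75
```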
In every training step, the quantum perceptron unitaries are adjusted via \(U \to {e^{i\varepsilon K}}U\), where \(K\) contains the parameters of the unitary operation and \(\varepsilon\) is a fixed step size.
The matrices \(K\) are chosen to maximize the rate of improvement of the cost function. The change in \(C\) is given by Eq. (15).
$$\Delta C = \frac{\varepsilon }{N}\mathop \sum \limits_{{x = 1}}^{N} \mathop \sum \limits_{{l = 1}}^{{L + 1}} tr\left( {\sigma _{x}^{1} \Delta E^{l} \left( {\rho _{x}^{{l – 1}} } \right)} \right)$$
(15)
Here, \(\rho_{x}^{l}\) and \(\sigma_{x}^{l}\) denote specific sequences of \(E\) and \(F\) operations, where \(F\left(X\right)\) is the adjoint channel of the Completely Positive (CP) map \(E\left(X\right)\).
This model provides an equation to pinpoint the \(\:{\prime\:}\text{K}{\prime\:}\) matrices. The network’s layer-based structure simplifies the calculation of \(\:{K}_{j}^{l}\) for each quantum perceptron. To find this matrix, one needs the output state \(\:{\rho\:}^{(l-1)}\) from the prior layer and the \(\:{\sigma\:}^{l}\) state for the subsequent layer. The first state is derived by applying layer channels \(\:{E}^{1},{E}^{2},\dots\:,{E}^{(l-1)}\) to the input, and the second by applying adjoint channels to the target output up to the current layer.
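The update rule \(U \to e^{i\varepsilon K}U\) can be checked numerically in a few lines of Python; the randomly generated Hermitian \(K\) and the step size below are placeholders for the parameter matrices the training procedure would actually compute.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def random_hermitian(d):
    """Stand-in for the parameter matrix K of a perceptron (assumed, not trained)."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

def update_unitary(U, K, eps=0.01):
    """Training step from the text: U -> exp(i*eps*K) U, which keeps U unitary."""
    return expm(1j * eps * K) @ U

U = np.eye(4, dtype=complex)          # a perceptron on one input + one output qubit (dim 2^2)
K = random_hermitian(4)
U_new = update_unitary(U, K)
print(np.allclose(U_new @ U_new.conj().T, np.eye(4)))   # True: unitarity preserved
```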
Bayesian optimization
BO is a probabilistic optimization method for expensive black-box functions, used when each evaluation of the objective is costly and therefore infrequent. It models the unknown function with a Gaussian Process (GP), described as Eq. (16).
$$\:f\left(x\right)\sim\:\mathcal{G}\mathcal{P}\left(m\left(x\right),k\left(x,{x}^{{\prime\:}}\right)\right)$$
(16)
where,
- \(f\left(x\right)\) → the function to be optimized,
- \(m\left(x\right)\) → the mean function,
- \(k\left(x,{x}^{{\prime\:}}\right)\) → the covariance function or kernel.
The choice of kernel, such as the RBF or Matérn, can significantly impact the model's performance.
The objective is to find Eq. (17).
$$x^{*}=\underset{x}{\mathrm{ArgMax}}\;f\left(x\right)$$
(17)
but since \(f\left(x\right)\) is expensive to evaluate, the GP model is used to approximate it.
The model employs an acquisition function, the Expected Improvement (EI), to determine the next sampling point based on the expected improvement over the best-known function value at a new point \(x\), Eq. (18).
$$EI\left(x\right)=\mathbb{E}\left[\mathrm{Max}\left(f\left(x\right)-f\left(x^{*}\right),0\right)\right]$$
(18)
where \(f\left(x^{*}\right)\) is the best objective value observed so far and the expectation is taken with respect to the GP posterior at \(x\).
The real function is evaluated at the acquisition function’s recommended point, new observations are added to the dataset, the GP is updated, and the process is repeated, as described in the algorithm.

The process ends after a set number of steps or when the expected improvement falls below a threshold, indicating that further sampling is unlikely to yield meaningful gains.
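The loop just described can be sketched compactly with scikit-learn's Gaussian Process and the Expected Improvement of Eq. (18); the 1-D toy objective, the RBF length scale, and the sampling budget below are illustrative assumptions, not the QNN cost landscape.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def objective(x):                       # stands in for the expensive black-box f(x)
    return -np.sin(3 * x) - x**2 + 0.7 * x

def expected_improvement(X_cand, gp, f_best, xi=0.01):
    """Eq. (18) evaluated under the GP posterior (maximization form)."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - f_best - xi) / sigma
    return (mu - f_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

X = rng.uniform(-1, 2, size=(3, 1))     # a few initial evaluations of the objective
y = objective(X).ravel()
for _ in range(15):
    # Eq. (16): GP surrogate with an RBF kernel, refit after every new observation
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
    gp.fit(X, y)
    cand = np.linspace(-1, 2, 200).reshape(-1, 1)
    x_next = cand[np.argmax(expected_improvement(cand, gp, y.max()))]
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next[0]))
print(X[np.argmax(y)], y.max())         # approximate maximizer x* of Eq. (17)
```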
QNN + BO in Agri-IoT
The QNN + BO is pivotal in this proposed Agri-IoT solution, particularly in data analysis and decision-making processes. Here is a summary of how this innovative algorithm functions and contributes to the model:
1. QNN: QNNs process data using quantum bits that can exist in multiple states simultaneously. In this model, multiple QNN layers handle large Agri-IoT datasets, such as soil parameters, crop health data, and environmental factors.
2. BO: BO optimizes the QNN's parameters, enhancing its learning efficiency. It uses a probabilistic model, typically a Gaussian Process, to map the relationship between the QNN parameters and their performance metrics. BO iteratively updates this model to find the optimal parameter settings, balancing exploration of new parameters with exploitation of known good ones. This approach is advantageous when the parameter space is vast and complex, as is the case for the QNN.
3. Role in Agri-IoT: Within this Agri-IoT, the QNN + BO is primarily responsible for analyzing the data collected from the various IoT devices. It processes this data to extract meaningful patterns and insights critical for informed agricultural decisions. For instance, it can predict crop yield from environmental data or identify early signs of disease in crops. The QNN's quantum capabilities allow large-scale data to be handled more efficiently than traditional Artificial Intelligence (AI) methods, while BO ensures that the QNN learns effectively from this data.
Integrating BO into the QNN is a critical innovation that addresses some of the model's limitations. A notable constraint in the adopted QNN is the exponential scaling of the parameter space due to the complexity of the unitary operators, particularly when multiple layers and perceptrons are involved. The parameter space for each layer unitary \(U_{l}\left(\theta_{l}\right)\) and the transition maps \(T_{l}\left(\alpha_{l}\right)\) is enormous, making classical optimization techniques inefficient or infeasible for high-dimensional parameter optimization. The cost function \(C\), defined as the average fidelity, can be highly non-convex with multiple local optima, making it a challenging landscape for conventional optimization algorithms. BO within the QNN intelligently explores this high-dimensional, non-convex landscape. It uses a GP with a Radial Basis Function (RBF) kernel to guide the search towards regions that improve the cost function. This is particularly beneficial because the number of copies required for each training iteration scales linearly with the number of network parameters. BO can also adaptively update its estimates of the objective function, making it robust to prediction noise and other sources of quantum error.

The RBF kernel was selected for the QNN + BO because of its adaptability and smoothness, which reduce the risk of overfitting, simplify tuning, and allow the BO to search the cost-function landscape efficiently while avoiding local optima in complex parameter spaces.
Ensuring security using BCT in QNN + BO
BCT significantly enhances data security and management in the proposed Agri-IoT. It distributes control across multiple tiers (LADH, PAFDH, CAFDH, CADH), reducing central points of vulnerability. Integrating the EC³ within the Crypto Assistant ensures advanced encryption, which is vital for securing agricultural data. The MAC policy and Intelligent Data Assistants such as the Key Generation Assistant (KGA) and Signature Verification Assistant (SigVerA) leverage BCT for robust access control and transaction integrity. This bolsters security and increases operational efficiency, making the system a highly effective solution for agricultural challenges. This multi-tiered BC for Agri-IoT prioritizes robust security mechanisms across its layers.
Key components contributing to this secure model include:
1. EC³ Algorithm: A hybrid model combining ECC with the unpredictability of chaotic maps, enhancing encryption robustness against traditional cryptographic attacks.
2. COA for FS: Tailored for feature selection, the COA keeps the ML pipeline balanced, contributing to system security by preventing overfitting and underfitting.
3. QNN + BO: Enables efficient navigation of complex parameter spaces, securing the learning process against computational vulnerabilities.
4. MAC Policy: Implemented at every tier of the model, the MAC policy is essential for authenticating and authorizing data transactions, ensuring that only verified interactions occur within the system.
5. Intelligent Data Assistants (IDA): Comprising several specialized assistants (KGA, AuthA, AuthzA, CryptA, SigVerA), each plays a significant role in security operations, ranging from key generation and data encryption to signature verification.
6. Layered Security Method: The model's tier-based design (Edge, Fog, Cloud) incorporates tier-specific security protocols, ensuring comprehensive and effective defense against potential attacks.
By incorporating these elements, the model addresses unique security challenges in Agri-IoT and sets a new standard for a secure, efficient, and reliable model within the agricultural sector.
