<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Lab Projects | DIPr Lab at PSU</title><link>https://diprlab.github.io/project/</link><atom:link href="https://diprlab.github.io/project/index.xml" rel="self" type="application/rss+xml"/><description>Lab Projects</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Wed, 26 Nov 2025 00:00:00 +0000</lastBuildDate><image><url>https://diprlab.github.io/media/logo_hu_b20e6a1540b35ad9.png</url><title>Lab Projects</title><link>https://diprlab.github.io/project/</link></image><item><title>PaPrica-PS: Fine-Grained, Dynamic Access Control Policy Enforcement for Pub/Sub Systems</title><link>https://diprlab.github.io/project/pubsubcontrol/</link><pubDate>Wed, 26 Nov 2025 00:00:00 +0000</pubDate><guid>https://diprlab.github.io/project/pubsubcontrol/</guid><description>&lt;p&gt;High-volume publish/subscribe (pub/sub) systems include collections
of hardware and software components such as IoT sensors and the protocols
that connect them. Many of these components have historically lacked robust
security and privacy controls by default, despite the significant security,
safety, and privacy implications that drive the need to control access to
the data they generate and manage.&lt;/p&gt;
&lt;p&gt;Such pub/sub systems power critical infrastructure ranging from smart buildings
and factories to full city-wide device networks.
In this project, we are developing a
fine-grained access control (FGAC) model and enforcement mechanism to
address this gap. Our proposed FGAC model builds upon
Attribute-Based Access Control (ABAC), defining access rules based
on MQTT protocol message &amp;ldquo;topics&amp;rdquo;, the attributes of the subscribers
and publishers to those topics, and
ephemeral, per-message context information.&lt;/p&gt;
&lt;p&gt;Our framework is platform-agnostic; we implemented our experimental
prototype on an off-the-shelf, open-source MQTT pub/sub
system without altering the base code of the server itself.&lt;/p&gt;</description></item><item><title>My Privacy Awareness Learning Games (MyPAL Games)</title><link>https://diprlab.github.io/project/eduprivacy/</link><pubDate>Mon, 03 Mar 2025 00:00:00 +0000</pubDate><guid>https://diprlab.github.io/project/eduprivacy/</guid><description>&lt;p&gt;My Privacy Awareness Learning Games (MyPAL Games) is an educational website designed to help children learn about different aspects of online privacy. It presents
lessons in the format of comics and then quizzes children on
their knowledge after each lesson.&lt;/p&gt;</description></item><item><title>Fine Grained Access Control in Vector Databases</title><link>https://diprlab.github.io/project/vectordb-access-control/</link><pubDate>Thu, 20 Feb 2025 00:00:00 +0000</pubDate><guid>https://diprlab.github.io/project/vectordb-access-control/</guid><description>&lt;p&gt;Vector databases are particularly well-suited for similarity search using algorithms such as approximate nearest neighbor (ANN) search, and they are used in the development of Retrieval-Augmented Generation (RAG) systems to reduce hallucinations in the responses of AI systems. One significant challenge in using vector databases, especially in applications like RAG, is ensuring data privacy and security. For example, a clothing company that builds an AI chatbot backed by a vector database containing customer orders and product data could expose sensitive customer information without proper access restrictions. Incorporating fine-grained access control in vector databases is important for enforcing user preferences on data sharing and complying with privacy regulations. This project explores how to embed fine-grained access control within vector databases to ensure secure and privacy-compliant query answering.&lt;/p&gt;</description></item><item><title>Accord</title><link>https://diprlab.github.io/project/accord/</link><pubDate>Sat, 01 Jun 2024 00:00:00 +0000</pubDate><guid>https://diprlab.github.io/project/accord/</guid><description>&lt;p&gt;Users are increasingly adopting collaborative cloud services like Google Drive.
The lack of fine-grained access controls on many cloud services makes it likely that one user&amp;rsquo;s actions will violate the expectations of other users, resulting in multiuser conflicts.
For example, a user with editor permissions may add a user outside the organization and revoke the permissions of another user, all without consent from the original resource owner.
These multiuser conflicts may compromise a resource&amp;rsquo;s confidentiality, integrity, or availability, leading to a lack of trust in cloud services.&lt;/p&gt;
&lt;p&gt;ACCORD is a web application built on top of Google Drive that prevents and detects multiuser conflicts.
It employs a simulator to help users preemptively identify potential conflicts and
assist them in defining action constraints.
Then, using these action constraints, ACCORD can automatically detect future conflicts and suggest resolutions.&lt;/p&gt;
&lt;p&gt;Currently, we are testing the scalability and practicality of ACCORD with larger numbers of users and resources.&lt;/p&gt;</description></item><item><title>BL(u)E CRAB</title><link>https://diprlab.github.io/project/bluecrab/</link><pubDate>Sat, 01 Jun 2024 00:00:00 +0000</pubDate><guid>https://diprlab.github.io/project/bluecrab/</guid><description>&lt;p&gt;Detecting unwanted or suspicious Bluetooth Low Energy (BLE)-based trackers is challenging, due in part to cross-platform compatibility issues and inconsistent detection methods. BL(u)E CRAB identifies suspicious BLE trackers based on various risk factors, including the number of encounters, time spent with the user, distance traveled with the user, the number of areas in which each device appeared, and the device&amp;rsquo;s proximity to the user. BL(u)E CRAB presents this information in an intuitive way to help users determine which devices pose the biggest threat to them in their context.&lt;/p&gt;</description></item><item><title>Sieve</title><link>https://diprlab.github.io/project/sieve/</link><pubDate>Sat, 01 Jun 2024 00:00:00 +0000</pubDate><guid>https://diprlab.github.io/project/sieve/</guid><description>&lt;p&gt;SIEVE is a versatile middleware that enhances access control in a DBMS, enabling efficient query processing even with a large number of access control policies. We&amp;rsquo;re currently integrating caching to further improve query performance. Additionally, we&amp;rsquo;ve developed a workload generator that simulates various scenarios to test policy models and ensure access control compliance, reflecting real-world conditions.&lt;/p&gt;</description></item><item><title>Tattletale</title><link>https://diprlab.github.io/project/tattletale/</link><pubDate>Sat, 01 Jun 2024 00:00:00 +0000</pubDate><guid>https://diprlab.github.io/project/tattletale/</guid><description>&lt;p&gt;Tattletale uses denial constraints to discover data inferences within a database relative to sensitive cells.
The cells that make up the denial constraints are then checked to determine which cells infer information about them. All cells that infer data about the sensitive cells, together with the cells that could be used to reconstruct those inferences, are placed into a list that is used to generate a view excluding them. Since an inference can be reconstructed only when a single predicate is missing, we use this fact to minimize how many cells must be hidden. The benefit of Tattletale is that it provides protection against inference, which access control lists do not. The current challenge is improving runtime performance and decreasing the number of cells that must be hidden while still guaranteeing a given level of protection against reconstruction.&lt;/p&gt;</description></item></channel></rss>