Graduate coursework

Nanomedicine Awareness IRB Protocol

A 4-person team designed a cross-sectional IRB protocol to measure how Stevens students perceive nanomedicine cancer therapies. I owned the Specific Aims and Hypothesis, Recruitment Methods, and Eligibility Criteria sections. Anonymous, minimal-risk, SONA + Qualtrics, with a stratified subgroup analysis baked into the design.

Role Aims, Recruitment, Eligibility (4-person team)
Timeline Apr - May 2026
Type Clinical Research / IRB Protocol Design
Sample Stevens SONA Pool · n = 80-100
Read time 7 min

The question we asked

Nanomedicine is already in clinical use for cancer treatment. Liposomal doxorubicin and albumin-bound paclitaxel are FDA-approved and prescribed today. But how well do future healthcare consumers, decision makers, and clinicians (today's university students) actually understand these therapies? And what factors shape their willingness to consider them in real treatment scenarios?

Our team designed a full IRB protocol to answer that question. Cross-sectional anonymous online survey, recruited through the Stevens SONA Subject Pool, delivered through Qualtrics. Three specific aims, a testable hypothesis, and a built-in subgroup analysis comparing STEM and non-STEM majors.

The constraint

Minimal-risk, anonymous, no clinical screening

The protocol had to score well on the Research Procedures rubric (20 points, the heaviest section) without crossing into greater-than-minimal risk. That ruled out invasive monitoring, identifiable data, and clinical screening of patients in active cancer treatment.

The design

Self-screening on the consent page

Rather than asking sensitive medical questions during eligibility, the consent page advises that the survey discusses cancer treatments and that anyone currently receiving cancer treatment may decline. That preserves participant autonomy without adding identifiable medical screening data to the protocol.

Protocol flow

The protocol moves a participant from a SONA listing to a credited completion in five steps. Each step is bounded so a participant can withdraw at any point with no penalty to their SONA standing.

Participant Path
1. SONA Listing · Plain language, 5-10 min, 0.25 credit
2. Qualtrics Consent · Self-screen advisory
3. Survey Items · Awareness + 7-pt Likert perception
4. Stratified Analysis · STEM vs non-STEM, OR + descriptives
5. SONA Credit · Auto-granted via integration

Key design decisions

Three specific aims, one testable hypothesis. Aim 1 describes awareness. Aim 2 evaluates perceptions across benefits, risks, trust, and willingness. Aim 3 explores associations with academic major and prior healthcare or research exposure. The single primary hypothesis (STEM majors will show higher awareness and more favorable safety perceptions than non-STEM majors) is what drives the stratified analysis.

SONA only, no targeted outreach. No mass emails, no flyers, no classroom announcements, no social media. Recruiting only through the existing SONA pool keeps selection bias low, eliminates the appearance of coercion, and limits the protocol to a population that has already opted in to research participation.

0.25 SONA credit, no monetary compensation. Modeled on standard 15-minute-increment SONA conventions. Acknowledges participant time without becoming a coercive incentive that pushes people to participate who would otherwise decline.

Why this matters for medtech

Designing an IRB protocol forces you to think about consent, recruitment, eligibility, risks, benefits, and analysis as a single integrated package, the same way Design Controls does for a medical device. Every part of the protocol has to be defended to the IRB and to the rubric. That structured rigor is the same skill you bring to a clinical investigation under 21 CFR 812, a post-market clinical follow-up under EU MDR, or any federally funded human subjects study under the Common Rule (45 CFR 46).

Protocol foundation

Each component of the protocol is grounded in a specific framework or instrument. None of it was made up to fit the rubric.

Common Rule (45 CFR 46)

Federal regulation governing human subjects research. Anonymous, minimal-risk, opinion-based survey design fits the exempt or expedited review categories under Subpart A.

SONA Subject Pool

Stevens-administered participant pool. Currently enrolled undergraduate and graduate students who voluntarily registered. Standard recruitment infrastructure for educational research at the university.

Qualtrics Survey Platform

Secure, FERPA-compatible survey delivery. Anonymous responses by default. Auto-integration with SONA for credit granting without manual researcher approval.

7-point Likert Scale

Strongly Disagree to Strongly Agree, with an explicit Neutral midpoint. Cutoffs: responses above the midpoint = good perception, below it = poor perception. Wider scales improve reliability and validity over 5-point scales.

Odds Ratio for Stratification

OR = ad/bc on a 2x2 contingency table (STEM vs non-STEM crossed with good vs poor perception). Run separately for major, healthcare exposure, and research exposure. Standard cross-sectional epi tool.
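
As a sketch of what that calculation looks like in practice, here is a minimal Python example with a Woolf 95% confidence interval added. The cell counts are hypothetical placeholders; the real table exists only after the survey is fielded.

# Minimal sketch of the planned stratified analysis: odds ratio on a
# 2x2 table with a Woolf 95% confidence interval. Cell counts are
# hypothetical placeholders, not study data.
import math

# Rows: STEM / non-STEM. Columns: good / poor perception
# (good = response above the Likert midpoint).
a, b = 28, 12    # STEM:     good, poor
c, d = 19, 21    # non-STEM: good, poor

odds_ratio = (a * d) / (b * c)

# Woolf (log) method for the 95% CI on the odds ratio.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")

The same four-cell calculation repeats for the healthcare-exposure and research-exposure strata.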

Self-Screen Advisory

Consent-page note that survey discusses cancer treatments and current cancer patients may prefer not to participate. Preserves autonomy without adding identifiable medical questions to the protocol.

Protocol numbers

Numbers from the submitted IRB protocol. Sample size chosen for a cross-sectional subgroup comparison recruited through the Stevens SONA pool over a single academic term.

80-100 · target sample size
3 · specific aims
5-10 · survey minutes
0.25 · SONA credit awarded

Built with

SONA Systems · Qualtrics · 7-pt Likert · Cross-Sectional Design · Odds Ratio Analysis · 45 CFR 46 (Common Rule) · Stratified Sampling · Anonymous Consent

Why these choices?

SONA + Qualtrics over targeted outreach. Using existing recruitment and survey infrastructure cuts setup time, satisfies the IRB's expectations for confidentiality, and leverages a population already opted in to research participation. Targeted outreach (emails, flyers, classroom announcements) would have raised concerns about coercion and selection bias.

7-point Likert over 5-point. A 7-point scale gives finer-grained discrimination on perception items, which improves reliability and validity in attitudinal research. The explicit Neutral midpoint also lets respondents who genuinely have no opinion register that without forcing a directional answer.

Odds Ratio over a single t-test. The hypothesis is about subgroup differences in good vs poor perception, which is a categorical comparison. OR with 2x2 contingency tables maps directly to that question and produces effect sizes that stratified-analysis readers expect from cross-sectional epi work.

Limitations we reported

Single institution, convenience sample, no inter-rater work

The Stevens SONA Pool is a convenience sample at one engineering-heavy university. Results would not generalize to non-Stevens, non-undergraduate, or non-US populations without external replication. The survey instrument itself is novel for nanomedicine awareness, so we could not piggyback on a validated instrument and would need a small psychometric pilot before scaling. Inter-rater reliability is not relevant for a self-administered survey, but inter-item reliability (Cronbach's alpha across the perception items) would be a v2 addition. The protocol is also a design, not yet a fielded study; submitted, not run.
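
For reference, the v2 reliability check is a short calculation. A minimal sketch, assuming the perception items arrive as a participants-by-items array of 7-point scores; the simulated responses below are illustrative only, not study data.

# Minimal sketch of a v2 inter-item reliability check: Cronbach's
# alpha over the perception items. Simulated data, not study data.
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of summed scores)."""
    k = responses.shape[1]                          # number of items
    item_vars = responses.var(axis=0, ddof=1)       # per-item variance
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate 90 participants answering 8 Likert items driven by one
# shared latent attitude, so the items correlate as a real scale would.
rng = np.random.default_rng(0)
trait = rng.normal(4.0, 1.0, size=(90, 1))
noise = rng.normal(0.0, 1.0, size=(90, 8))
responses = np.clip(np.rint(trait + noise), 1, 7)

print(f"alpha = {cronbach_alpha(responses):.2f}")

A value above the conventional 0.7 threshold would support treating the perception items as a single scale; below it, items get reworked before a full fielding.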

What I learned

An IRB protocol is a design controls document for a clinical study. The aims map to user needs. The procedures map to the design. The risks map to the risk management file. The eligibility criteria map to the use specification. Every section has to defend itself against a reviewer the same way every Design Controls section has to defend itself against an FDA auditor. Once you see the parallel, the protocol stops being a paper exercise and starts being a methodology.

Recruitment is the most under-thought part of most protocols. Our first instinct was to spread recruitment across SONA, classroom announcements, and email blasts to maximize sample size. Walking through the IRB lens flipped that: every additional channel adds coercion risk, confidentiality risk, and selection bias. SONA-only is harder for headcount but cleaner on every other axis. The right answer was the smaller surface area.

Eligibility criteria are where ethics and statistics intersect. Excluding patients in active cancer treatment looks protective on the surface. But adding that exclusion would require a clinical screening question, which forces collection of identifiable medical data into an otherwise anonymous protocol. The cleaner answer is a self-screen advisory on the consent page that lets the participant make their own decision. That swap, exclusion criterion to consent advisory, was the most useful design lesson of the project.

What I got wrong

The protocol scored well, but a real submission to a real IRB would catch real gaps.

01
No formal sample-size justification.

We targeted 80-100 participants because that fits the SONA pool over a single term and gives reasonable subgroup cell sizes for the OR calculation. But we did not run a power calculation against a specific minimum detectable effect size for the STEM versus non-STEM comparison. A real submission would include that calculation, with assumed proportions for good perception in each group and the alpha and power thresholds. Without it, the sample size is plausible but not defensible.
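
For illustration, the missing calculation is a few lines with statsmodels. The assumed proportions below (65% good perception among STEM majors, 40% among non-STEM) are placeholders chosen for the sketch, not numbers from the protocol.

# Sketch of the missing sample-size justification: a two-sided,
# two-proportion power calculation. Assumed proportions are
# placeholders, not numbers from the protocol.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_stem, p_nonstem = 0.65, 0.40      # assumed share with "good" perception
effect_size = proportion_effectsize(p_stem, p_nonstem)   # Cohen's h

n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,    # two-sided significance threshold
    power=0.80,    # 1 - beta
    ratio=1.0,     # equal group sizes
)
print(f"minimum n per group: {n_per_group:.1f}")

Under those particular assumptions the answer comes out near 31 per group, so 80-100 total would clear the bar; assume a smaller gap between the groups and the requirement quickly outgrows the planned sample, which is exactly why the explicit calculation belongs in the submission.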

02
No piloting of the survey instrument.

The protocol describes the survey items conceptually (awareness, benefits, risks, trust, willingness) but does not include the instrument itself, validation evidence, or a piloting plan. Survey instruments need cognitive interviewing or a small pilot to catch ambiguous wording, ceiling and floor effects, and inter-item reliability problems before fielding at full scale. The protocol should have included a 5-10 person pilot phase ahead of the main 80-100 person study.

03
No data-handling plan beyond a 2-year retention window.

The protocol says data will be deleted from all hardware and software 2 years from survey launch. It does not specify how data is stored during that 2-year window (encrypted at rest, where, who has access), how it is transmitted between PIs, what happens if a PI leaves Stevens, or what the audit trail looks like. A real IRB would want a data security plan, not just a retention deadline. That gap would have been the most likely revision request.

Answers before the interview

Three questions a clinical research or validation hiring manager would reasonably ask, answered up front.

Q1
This is a survey protocol, not a device study. How does it apply to medtech?

Three transferable skills. First, the structure of an IRB protocol parallels the structure of a clinical investigation under 21 CFR 812 and post-market clinical follow-up under EU MDR. Same language, same logic, same defensive posture against reviewers. Second, the design choices on consent, eligibility, recruitment, and risk classification are the same choices clinical V&V teams make on a regular basis. Third, the discipline of mapping a hypothesis to specific aims to specific procedures is exactly how a Design Verification protocol gets built. The medical context is different. The methodology is the same.

Q2
You owned three sections in a 4-person protocol. What was the actual contribution?

Specific Aims and Hypothesis (Aim 1, Aim 2, Aim 3, the STEM hypothesis), Recruitment Methods (SONA-only design, consent flow, advisory wording, compensation justification), and Eligibility Criteria (inclusion, exclusion, rationale for the self-screen instead of a clinical exclusion). I also wrote and delivered the slides covering those three sections in the team presentation. The Background, Statistical Analysis, Risks, and Benefits sections were owned by my teammates. I only claim what I wrote.

Q3
How would you adapt this protocol for a real product post-market study?

Different goals would change the design substantially. A post-market clinical follow-up study would need explicit performance and safety endpoints, larger and more representative sampling than a single-university convenience pool, validated instruments for whatever clinical outcome is being measured, and a data-security plan that satisfies HIPAA in the US and GDPR in the EU. Recruitment would shift to clinical sites with informed consent administered in person. Risk classification would likely be greater than minimal, which would change the IRB review path from exempt or expedited to full board. The conceptual map (aims, procedures, eligibility, risks, benefits, analysis) is the same. The implementation gets a lot heavier.

Interested in this work?

Clinical study protocol design, structured rigor on consent and recruitment, and the regulatory awareness to translate a research design into a Design Controls deliverable. These are skills I would bring to a clinical affairs, validation, or regulatory team.