ANALYSIS

Philippine Intern Survey Maps Four Categories of AI Tool Use in OJT

MegaOne AI · Apr 1, 2026 · 4 min read
Engine Score 5/10 — Notable

A study submitted to arXiv on March 20, 2026 surveyed 384 student interns at Philippine higher education institutions to document how they use AI tools during on-the-job training (OJT). Led by John Paul P. Miranda and seven co-authors, the research used a structured questionnaire to examine which tools students selected, how they applied them to specific workplace tasks, and how they assessed their own confidence and ethical awareness during their training placements.

  • 384 student interns surveyed via structured questionnaire across Philippine higher education institutions
  • ChatGPT was the most frequently reported AI tool; Quillbot, Canva AI, and Grammarly also appeared prominently
  • Four task-use categories identified: productivity and report writing, communication and content drafting, technical assistance and code support, and independent task completion
  • Students reported moderate confidence and described applying AI tools selectively and ethically — not as a default for every task

What Happened

John Paul P. Miranda, together with co-authors Rhiziel P. Manalese, Sheila M. Geronimo, Vernon Grace M. Maniago, Charlie K. Padilla, Aileen P. De Leon, Santa L. Merle, and Mark Anthony A. Castro, submitted “AI in Work-Based Learning: Understanding the Purposes and Effects of Intelligent Tools Among Student Interns” to arXiv on March 20, 2026. The paper examines how student interns in the Philippine higher education system engage with AI tools during mandatory OJT placements — structured work experience required for degree completion at most Philippine colleges and universities.

Data were collected from 384 respondents using a structured questionnaire. The instrument captured information on which AI tools students used, which specific workplace tasks they applied them to, and students’ self-assessments of their confidence levels, ethical awareness, and sense of institutional support during their internships.

Why It Matters

On-the-job training is a credit-bearing graduation requirement in Philippine higher education, making intern behavior during these placements professionally formative. Unlike classroom AI use — which has received comparatively more research attention — OJT settings introduce employer expectations, professional norms, and real task accountability that shape how students actually adopt and limit their use of tools.

Most prior studies on student AI adoption focus on academic assignments, exams, or homework. Research on AI use during formal work placements, where students face actual clients, deadlines, and supervisors, is limited. This study contributes direct survey data from that specific context, using a sample of 384 interns across programs in Philippine tertiary institutions.

Technical Details

Analysis of task-based usage produced four categories of AI application among interns: productivity and report writing; communication and content drafting; technical assistance and code support; and independent task completion. These categories were derived from the structured questionnaire responses of all 384 participants.

ChatGPT was the most frequently cited tool across respondents. Quillbot, Canva AI, and Grammarly followed in order of reported frequency. The distribution indicates that language-intensive tasks — drafting documents, writing reports, editing content — drove the largest share of AI tool adoption, with technical assistance and visual tools serving secondary roles.

Students described a “moderate” level of confidence in their AI use during OJT. The paper’s abstract states that interns “applied these tools selectively and ethically during OJT tasks,” indicating purposeful rather than indiscriminate use. The study relied entirely on self-reported survey data collected at a single point in time; it did not include observational methods, controlled experimental conditions, or longitudinal tracking.

Who’s Affected

The findings are most directly relevant to Philippine higher education institutions that require OJT for degree completion. The authors specifically address curriculum coordinators and program advisors whose OJT preparation modules currently lack formal AI literacy components.

Employers who host student interns are also implicated. When students arrive at placements without structured institutional guidance on responsible AI use, host organizations absorb that onboarding gap informally. The study does not include employer-side data, but it frames uneven student preparation as a systemic concern requiring policy-level intervention rather than individual adjustment.

What’s Next

The study’s authors recommend that higher education programs formally include AI literacy and onboarding as a standard component of OJT preparation. They also call for clear institutional policies on AI tool use and equitable access across campuses, noting that resource disparities could leave students at under-resourced institutions at a disadvantage when entering AI-integrated workplaces.

The paper acknowledges limitations inherent in a cross-sectional, self-report design: the data cannot establish whether AI tool use improved intern performance or long-term employability. No follow-up data collection was described in the abstract. The paper was submitted to arXiv on March 20, 2026 and had not yet undergone formal peer review at the time of publication.


MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
