Summary
All models included random slopes for Condition by participant, as well as random intercepts for both participants and images.
Notably, only the attractiveness model successfully converged when random slopes for Relevance were added at the image level. While it is generally recommended to specify the maximal random-effects structure justified by the design to improve generalizability (Bar et al., 2023), maintaining consistency across models is also important to enable meaningful comparisons, so the latter was prioritised.
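For concreteness, a minimal sketch of the two candidate specifications in glmmTMB formula syntax (Rating is a placeholder for any of the outcome variables, not a column in the data):
Code
# Maximal structure justified by the design (converged only for attractiveness):
f_maximal <- Rating ~ Gender / Relevance / Condition +
  (Condition | Participant) + (Relevance | Stimulus)
# Simpler structure retained for consistency across all outcomes:
f_retained <- Rating ~ Gender / Relevance / Condition +
  (Condition | Participant) + (1 | Stimulus)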
The effect of image condition (AI-generated vs. photo) differed significantly between females and males.
For females, images labeled as AI-generated were rated lower in attractiveness, beauty, and trustworthiness when the images were irrelevant. When the images were relevant, AI-generated labels led to lower ratings in beauty and trustworthiness.
For males, the only significant effect of condition was that irrelevant images labeled as AI-generated were rated lower in beauty.
Moderation of the effect of Condition on subjective ratings was significant only for females. Specifically:
Females’ attractiveness ratings of irrelevant images labeled AI-generated were moderated by Openness to Experience (see figures below)
Females’ beauty ratings of relevant images labeled AI-generated were moderated by Conscientiousness (see figures below)
Females’ trustworthiness ratings of irrelevant images were moderated by Openness to Experience, Honesty-Humility, and Conscientiousness.
Females’ trustworthiness ratings of relevant images were moderated by Emotionality and Conscientiousness.
Data Preparation
Code
library(tidyverse)
library(easystats)
library(patchwork)
library(ggside)
# Helper to display a formatted coefficient table for a model;
# `filter` keeps only parameters whose names match the given pattern.
results_table <- function(model, effects = "fixed", filter = NULL) {
  if ("marginaleffects" %in% class(model)) {
    model |>
      parameters::parameters() |>
      as.data.frame() |>
      select(-Parameter, -SE, -S, z = Statistic) |>
      insight::format_table() |>
      parameters::display()
  } else {
    display(parameters::parameters(model, effects = effects, keep = filter))
  }
}
Code
df <- read.csv("../data/data_participants.csv")

# Set reference levels (Photograph, Relevant, Male) and rescale ratings
# to the [0, 1] interval used by the ordered-beta models below
dftask <- read.csv("../data/data_task.csv") |>
  mutate(Condition = fct_relevel(Condition, "Photograph", "AI-Generated"),
         Relevance = fct_relevel(Relevance, "Relevant", "Irrelevant"),
         Attractiveness = Attractiveness / 6,
         Beauty = Beauty / 6,
         Trustworthiness = Trustworthiness / 6,
         Realness = Realness / 6 + 0.5,
         RealnessBelief = ifelse(Realness > 0.5, 1, 0),
         Gender = fct_relevel(Gender, "Male", "Female"))

# Add participants' HEXACO scores (dropping the _NR and _R subscale columns)
dftask <- full_join(dftask,
                    select(df, Participant, starts_with("HEXACO"),
                           -ends_with("_NR"), -ends_with("_R")),
                    by = "Participant")
Visualisation of Variables
Code
dftask |>
  ggplot(aes(x = Attractiveness, fill = Condition)) +
  geom_bar(aes(y = after_stat(prop)), position = "dodge") +
  facet_grid(Relevance ~ Gender) +
  theme_bw()
This model looks at the effect of Gender and Relevance on attractiveness scores, accounting for random variability due to participants and items (i.e., random effects).
Females did not rate faces significantly higher in attractiveness than males. However, both genders rated irrelevant images lower in attractiveness than relevant ones.
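Throughout these models, the nested formula Gender / Relevance estimates Relevance effects separately within each gender rather than as a main effect plus an interaction. A minimal sketch of the resulting design columns (the toy data frame is purely illustrative):
Code
toy <- expand.grid(Gender = factor(c("Male", "Female"), levels = c("Male", "Female")),
                   Relevance = factor(c("Relevant", "Irrelevant"), levels = c("Relevant", "Irrelevant")))
# The nesting expands to Gender + Gender:Relevance, one Relevance coefficient per gender:
colnames(model.matrix(~ Gender / Relevance, data = toy))
# "(Intercept)" "GenderFemale" "GenderMale:RelevanceIrrelevant" "GenderFemale:RelevanceIrrelevant"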
Code
m_a <- glmmTMB::glmmTMB(Attractiveness ~ Gender / Relevance +
                          (Relevance | Participant) + (1 | Stimulus),
                        data = dftask,
                        family = glmmTMB::ordbeta(),
                        control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m_a)
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| (Intercept) | -0.72 | 0.13 | (-0.97, -0.46) | -5.50 | < .001 |
| Gender (Female) | 0.15 | 0.14 | (-0.13, 0.43) | 1.03 | 0.304 |
| Gender (Male) × RelevanceIrrelevant | -0.82 | 0.14 | (-1.08, -0.55) | -6.05 | < .001 |
| Gender (Female) × RelevanceIrrelevant | -0.29 | 0.08 | (-0.44, -0.13) | -3.61 | < .001 |
Code
estimate_relation(m_a) |>
  ggplot(aes(x = Relevance, y = Predicted)) +
  geom_pointrange(aes(ymin = CI_low, ymax = CI_high, color = Relevance),
                  position = position_dodge(width = 0.5)) +
  scale_color_manual(values = c("Relevant" = "#03A9F4", "Irrelevant" = "#FF9800"), guide = "none") +
  labs(y = "Attractiveness") +
  facet_wrap(~Gender) +
  theme_bw()
Code
dftask |>
  ggplot(aes(x = Beauty, fill = Condition)) +
  geom_bar(aes(y = after_stat(prop)), position = "dodge") +
  facet_grid(Relevance ~ Gender) +
  theme_bw()
This model looks at the effect of Gender and Relevance on beauty scores, accounting for random variability due to participants and items (i.e., random effects).
Females rated faces significantly higher in beauty than males. Males rated irrelevant images as more beautiful than relevant ones, whereas females rated irrelevant images as less beautiful than relevant ones; both effects were significant.
Code
m_b <- glmmTMB::glmmTMB(Beauty ~ Gender / Relevance +
                          (Relevance | Participant) + (1 | Stimulus),
                        data = dftask,
                        family = glmmTMB::ordbeta(),
                        control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m_b)
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| (Intercept) | -0.61 | 0.12 | (-0.84, -0.39) | -5.29 | < .001 |
| Gender (Female) | 0.57 | 0.12 | (0.32, 0.81) | 4.55 | < .001 |
| Gender (Male) × RelevanceIrrelevant | 0.23 | 0.07 | (0.09, 0.38) | 3.13 | 0.002 |
| Gender (Female) × RelevanceIrrelevant | -0.10 | 0.04 | (-0.19, -0.01) | -2.25 | 0.024 |
Code
estimate_relation(m_b) |>
  ggplot(aes(x = Relevance, y = Predicted)) +
  geom_pointrange(aes(ymin = CI_low, ymax = CI_high, color = Relevance),
                  position = position_dodge(width = 0.5)) +
  scale_color_manual(values = c("Relevant" = "#03A9F4", "Irrelevant" = "#FF9800"), guide = "none") +
  labs(y = "Beauty") +
  facet_wrap(~Gender) +
  theme_bw()
Code
dftask |>
  ggplot(aes(x = Trustworthiness, fill = Condition)) +
  geom_bar(aes(y = after_stat(prop)), position = "dodge") +
  facet_grid(Relevance ~ Gender) +
  theme_bw()
This model looks at the effect of Gender and Relevance on trustworthiness scores, accounting for random variability due to participants and items (i.e., random effects).
Females rated faces higher in trustworthiness than males, but this effect was not significant. Males rated irrelevant images as significantly more trustworthy than relevant ones, whereas the same effect was not significant for females.
Code
m_t <- glmmTMB::glmmTMB(Trustworthiness ~ Gender / Relevance +
                          (Relevance | Participant) + (1 | Stimulus),
                        data = dftask,
                        family = glmmTMB::ordbeta(),
                        control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m_t)
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| (Intercept) | -0.19 | 0.11 | (-0.41, 0.03) | -1.67 | 0.095 |
| Gender (Female) | 0.11 | 0.13 | (-0.14, 0.35) | 0.86 | 0.391 |
| Gender (Male) × RelevanceIrrelevant | 0.19 | 0.06 | (0.08, 0.30) | 3.25 | 0.001 |
| Gender (Female) × RelevanceIrrelevant | 0.02 | 0.04 | (-0.05, 0.09) | 0.51 | 0.611 |
Code
estimate_relation(m_t) |>
  ggplot(aes(x = Relevance, y = Predicted)) +
  geom_pointrange(aes(ymin = CI_low, ymax = CI_high, color = Relevance),
                  position = position_dodge(width = 0.5)) +
  scale_color_manual(values = c("Relevant" = "#03A9F4", "Irrelevant" = "#FF9800"), guide = "none") +
  labs(y = "Trustworthiness") +
  facet_wrap(~Gender) +
  theme_bw()
Code
dftask |>
  ggplot(aes(x = Realness, fill = Condition)) +
  geom_bar(aes(y = after_stat(prop)), position = "dodge") +
  facet_grid(Relevance ~ Gender) +
  theme_bw()
The model evaluating the effect of Gender and Relevance on Realness ratings shows no significant effects.
Code
m_r <- glmmTMB::glmmTMB(Realness ~ Gender / Relevance +
                          (Relevance | Participant) + (1 | Stimulus),
                        data = dftask,
                        family = glmmTMB::ordbeta(),
                        control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m_r)
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| (Intercept) | -0.12 | 0.08 | (-0.28, 0.04) | -1.45 | 0.147 |
| Gender (Female) | 0.14 | 0.09 | (-0.04, 0.32) | 1.55 | 0.122 |
| Gender (Male) × RelevanceIrrelevant | 0.09 | 0.06 | (-0.02, 0.21) | 1.55 | 0.122 |
| Gender (Female) × RelevanceIrrelevant | -0.04 | 0.04 | (-0.12, 0.03) | -1.17 | 0.244 |
Code
estimate_relation(m_r) |>
  ggplot(aes(x = Relevance, y = Predicted)) +
  geom_pointrange(aes(ymin = CI_low, ymax = CI_high, color = Relevance),
                  position = position_dodge(width = 0.5)) +
  scale_color_manual(values = c("Relevant" = "#03A9F4", "Irrelevant" = "#FF9800"), guide = "none") +
  labs(y = "Realness") +
  facet_wrap(~Gender) +
  theme_bw()
Attractiveness
This model examines the effects of Gender, Relevance, and Condition on Attractiveness, while accounting for random variability by including random intercepts for Participants and Stimuli, as well as random slopes for Condition (by Participant).
The model demonstrated strong conditional fit (R²_conditional = 0.830), indicating that most of the explained variance was due to random effects, particularly between participants (ICC = 0.828).
Notably, only female participants rated irrelevant images labeled AI-generated as significantly less attractive than the same images labeled as photographs. No significant condition effects were observed for male participants or for the other interaction terms.
The variance decomposition (D-vour) further confirms that most of the reliable variability lies in participant-level intercepts (0.98) and stimulus-level intercepts (0.97), with a much smaller contribution from the random slopes.
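The variance components behind these indices can be inspected directly. A quick sketch using insight (which easystats attaches); note that extraction may emit warnings for ordered-beta models:
Code
vc <- insight::get_variance(m_attractiveness)
vc$var.intercept  # between-participant and between-stimulus intercept variances
vc$var.slope      # random-slope variance (Condition by Participant)
vc$var.residual   # residual (within-cell) variance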
Code
m_attractiveness <- glmmTMB::glmmTMB(Attractiveness ~ Gender / Relevance / Condition +
                                       (Condition | Participant) + (1 | Stimulus),
                                     data = dftask,
                                     family = glmmTMB::ordbeta(),
                                     control = glmmTMB::glmmTMBControl(parallel = 8))
performance::performance(m_attractiveness)
# Indices of model performance
AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma
--------------------------------------------------------------------------------
6585.067 | 6585.088 | 6705.355 | 0.830 | 0.012 | 0.828 | 0.182 | 8.698
Code
results_table(m_attractiveness, filter = "Condition")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated | 5.49e-03 | 0.03 | (-0.04, 0.06) | 0.22 | 0.829 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated | -0.03 | 0.02 | (-0.06, 0.01) | -1.41 | 0.160 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated | 0.02 | 0.06 | (-0.10, 0.13) | 0.33 | 0.739 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated | -0.05 | 0.02 | (-0.08, -0.01) | -2.73 | 0.006 |
Code
performance::performance_dvour(m_attractiveness)
Cannot extract confidence intervals for random variance parameters from
models with more than one grouping factor.
Group Parameter D_vour
1 Participant (Intercept) 0.98015322
2 Participant ConditionAI-Generated 0.09320755
3 Stimulus (Intercept) 0.97355554
Beauty
This model examines the effects of Gender, Relevance, and Condition on Beauty, while accounting for random variability by including random intercepts for Participants and Stimuli, as well as random slopes for Condition (by Participant).
The model demonstrated strong conditional fit (R²_conditional = 0.858), indicating that most of the explained variance was due to random effects, particularly between participants (ICC = 0.849).
Images labeled AI-generated were rated significantly lower in beauty than those labeled as photographs: by females for both relevant and irrelevant images, and by males for irrelevant images.
The variance decomposition (D-vour) again shows that the random slopes contribute comparatively little reliable variability.
Code
m_beauty <- glmmTMB::glmmTMB(Beauty ~ Gender / Relevance / Condition +
                               (Condition | Participant) + (1 | Stimulus),
                             data = dftask,
                             family = glmmTMB::ordbeta(),
                             control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m_beauty, filter = "Condition")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated | -0.01 | 0.02 | (-0.06, 0.03) | -0.61 | 0.539 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated | -0.05 | 0.02 | (-0.08, -0.01) | -2.82 | 0.005 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated | -0.09 | 0.04 | (-0.18, -1.89e-03) | -2.00 | 0.045 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated | -0.05 | 0.01 | (-0.08, -0.02) | -3.58 | < .001 |
Code
performance::performance(m_beauty)
# Indices of model performance
AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma
-----------------------------------------------------------------------------------
-3010.979 | -3010.957 | -2890.691 | 0.858 | 0.062 | 0.849 | 0.175 | 9.737
Code
performance::performance_dvour(m_beauty)
Cannot extract confidence intervals for random variance parameters from
models with more than one grouping factor.
Group Parameter D_vour
1 Participant (Intercept) 0.9787324
2 Participant ConditionAI-Generated 0.2571985
3 Stimulus (Intercept) 0.9792688
Code
# modelbased::estimate_grouplevel(m_beauty)
Trustworthiness
This model examines the effects of Gender, Relevance, and Condition on Trustworthiness, while accounting for random variability by including random intercepts for Participants and Stimuli, as well as random slopes for Condition (by Participant).
The model demonstrated strong conditional fit (R²_conditional = 0.845), indicating that most of the explained variance was due to random effects, particularly between participants (ICC = 0.844).
Females rated images labeled AI-generated significantly lower in trustworthiness than those labeled as photographs, for both relevant and irrelevant images; neither condition effect was significant for males.
The variance decomposition (D-vour) further confirms that most of the reliable variability lies in participant-level intercepts (0.98) and stimulus-level intercepts (0.96).
Code
m_trustworthiness <- glmmTMB::glmmTMB(Trustworthiness ~ Gender / Relevance / Condition +
                                        (Condition | Participant) + (1 | Stimulus),
                                      data = dftask,
                                      family = glmmTMB::ordbeta(),
                                      control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m_trustworthiness, filter = "Condition")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated | -0.05 | 0.03 | (-0.11, 5.00e-03) | -1.79 | 0.073 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated | -0.12 | 0.02 | (-0.16, -0.07) | -5.39 | < .001 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated | 0.01 | 0.05 | (-0.08, 0.11) | 0.28 | 0.782 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated | -0.10 | 0.02 | (-0.14, -0.06) | -5.07 | < .001 |
Code
performance::performance(m_trustworthiness)
# Indices of model performance
AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma
--------------------------------------------------------------------------------
-482.908 | -482.887 | -362.620 | 0.845 | 0.009 | 0.844 | 0.191 | 8.437
Code
performance::performance_dvour(m_trustworthiness)
Cannot extract confidence intervals for random variance parameters from
models with more than one grouping factor.
Group Parameter D_vour
1 Participant (Intercept) 0.9762748
2 Participant ConditionAI-Generated 0.5348439
3 Stimulus (Intercept) 0.9606022
Coupling - Does fiction create a decoupling?
This model evaluates how beauty and image condition (i.e., AI-generated or photograph) jointly influence attractiveness ratings, and how this relationship is further moderated by the relevance of the image and the gender of the participant.
Beauty ratings are a strong predictor of attractiveness, but the strength of this relationship varies by gender and relevance. Condition does not significantly moderate the beauty–attractiveness relationship, indicating that fiction does not create a decoupling between beauty and attractiveness.
Code
m_coupling1 <- glmmTMB::glmmTMB(Attractiveness ~ Gender / Relevance / (Beauty * Condition) +
                                  (1 | Participant) + (1 | Stimulus),
                                data = dftask,
                                family = glmmTMB::ordbeta(),
                                control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m_coupling1, filter = "Beauty")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × Beauty | 3.64 | 0.08 | (3.48, 3.79) | 44.71 | < .001 |
| Gender (Female) × RelevanceRelevant × Beauty | 3.19 | 0.06 | (3.08, 3.31) | 56.24 | < .001 |
| Gender (Male) × RelevanceIrrelevant × Beauty | 2.47 | 0.17 | (2.15, 2.80) | 14.83 | < .001 |
| Gender (Female) × RelevanceIrrelevant × Beauty | 2.99 | 0.05 | (2.90, 3.09) | 59.60 | < .001 |
| Gender (Male) × RelevanceRelevant × Beauty × ConditionAI-Generated | 0.06 | 0.10 | (-0.14, 0.26) | 0.58 | 0.560 |
| Gender (Female) × RelevanceRelevant × Beauty × ConditionAI-Generated | -0.02 | 0.07 | (-0.16, 0.12) | -0.30 | 0.763 |
| Gender (Male) × RelevanceIrrelevant × Beauty × ConditionAI-Generated | 0.20 | 0.23 | (-0.26, 0.66) | 0.86 | 0.392 |
| Gender (Female) × RelevanceIrrelevant × Beauty × ConditionAI-Generated | 0.05 | 0.06 | (-0.07, 0.18) | 0.81 | 0.415 |
Code
performance::performance(m_coupling1)
# Indices of model performance
AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma
------------------------------------------------------------------------------------
-4147.632 | -4147.591 | -3979.228 | 0.903 | 0.485 | 0.812 | 0.135 | 13.765
This model evaluates how beauty and image condition jointly influence trustworthiness ratings, and how this relationship is further moderated by the relevance of the image and the gender of the participant.
Beauty ratings are a strong predictor of trustworthiness, but Condition does not significantly moderate the beauty–trustworthiness relationship. These findings indicate that fiction does not create a decoupling between beauty and trustworthiness.
Code
m_coupling2 <- glmmTMB::glmmTMB(Trustworthiness ~ Gender / Relevance / (Beauty * Condition) +
                                  (1 | Participant) + (1 | Stimulus),
                                data = dftask,
                                # family = glmmTMB::ordbeta(),
                                control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m_coupling2, filter = "Beauty")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × Beauty | 0.43 | 0.02 | (0.39, 0.46) | 23.56 | < .001 |
| Gender (Female) × RelevanceRelevant × Beauty | 0.47 | 0.01 | (0.45, 0.50) | 37.33 | < .001 |
| Gender (Male) × RelevanceIrrelevant × Beauty | 0.45 | 0.03 | (0.39, 0.51) | 14.39 | < .001 |
| Gender (Female) × RelevanceIrrelevant × Beauty | 0.44 | 0.01 | (0.42, 0.46) | 37.82 | < .001 |
| Gender (Male) × RelevanceRelevant × Beauty × ConditionAI-Generated | -0.03 | 0.02 | (-0.08, 9.00e-03) | -1.56 | 0.119 |
| Gender (Female) × RelevanceRelevant × Beauty × ConditionAI-Generated | -0.03 | 0.02 | (-0.06, 4.98e-03) | -1.64 | 0.101 |
| Gender (Male) × RelevanceIrrelevant × Beauty × ConditionAI-Generated | -0.06 | 0.04 | (-0.14, 0.02) | -1.50 | 0.132 |
| Gender (Female) × RelevanceIrrelevant × Beauty × ConditionAI-Generated | 4.45e-03 | 0.01 | (-0.02, 0.03) | 0.31 | 0.757 |
Code
performance::performance(m_coupling2)
# Indices of model performance
AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma
--------------------------------------------------------------------------------------
-13594.556 | -13594.522 | -13442.191 | 0.494 | 0.209 | 0.360 | 0.173 | 0.174
This model evaluates how attractiveness and image condition jointly influence trustworthiness ratings, and how this relationship is further moderated by the relevance of the image and the gender of the participant.
Attractiveness ratings are a strong predictor of trustworthiness, but the strength of this relationship varies by gender and relevance.
Importantly, Condition (AI-generated vs. photograph) significantly moderates the attractiveness–trustworthiness relationship only for females rating irrelevant images. In this group, the attractiveness–trustworthiness slope is steeper for images labeled AI-generated than for photographs, indicating a relative weakening (decoupling) of the usual link between attractiveness and trustworthiness when the image is irrelevant and presented as real.
These results suggest that fiction (the AI-generated label) alters the attractiveness–trustworthiness association for females evaluating irrelevant images, creating a decoupling between perceived attractiveness and trustworthiness in this context.
Code
m_coupling3 <- glmmTMB::glmmTMB(Trustworthiness ~ Gender / Relevance / (Attractiveness * Condition) +
                                  (1 | Participant) + (1 | Stimulus),
                                data = dftask,
                                # family = glmmTMB::ordbeta(),
                                control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m_coupling3, filter = "Attractiveness")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × Attractiveness | 0.39 | 0.02 | (0.36, 0.43) | 22.90 | < .001 |
| Gender (Female) × RelevanceRelevant × Attractiveness | 0.42 | 0.01 | (0.39, 0.44) | 32.61 | < .001 |
| Gender (Male) × RelevanceIrrelevant × Attractiveness | 0.32 | 0.03 | (0.25, 0.38) | 9.35 | < .001 |
| Gender (Female) × RelevanceIrrelevant × Attractiveness | 0.37 | 0.01 | (0.35, 0.40) | 31.94 | < .001 |
| Gender (Male) × RelevanceRelevant × Attractiveness × ConditionAI-Generated | -0.04 | 0.02 | (-0.08, 2.87e-03) | -1.82 | 0.068 |
| Gender (Female) × RelevanceRelevant × Attractiveness × ConditionAI-Generated | -0.02 | 0.02 | (-0.05, 9.14e-03) | -1.39 | 0.164 |
| Gender (Male) × RelevanceIrrelevant × Attractiveness × ConditionAI-Generated | -0.06 | 0.04 | (-0.15, 0.02) | -1.49 | 0.136 |
| Gender (Female) × RelevanceIrrelevant × Attractiveness × ConditionAI-Generated | 0.04 | 0.01 | (0.02, 0.07) | 3.07 | 0.002 |
Code
performance::performance(m_coupling3)
# Indices of model performance
AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma
--------------------------------------------------------------------------------------
-12810.602 | -12810.568 | -12658.237 | 0.488 | 0.170 | 0.383 | 0.176 | 0.177
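One way to probe this significant four-way interaction is to estimate the attractiveness–trustworthiness simple slopes within each Gender × Relevance × Condition cell. A sketch using modelbased (argument names assumed from current versions of the package):
Code
modelbased::estimate_slopes(m_coupling3,
                            trend = "Attractiveness",
                            by = c("Gender", "Relevance", "Condition"))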
Moderator
Attractiveness
Code
m1 <- glmmTMB::glmmTMB(Attractiveness ~ Gender / Relevance / (Condition * HEXACO18_HonestyHumility) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m1, filter = "HEXACO18_HonestyHumility")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 HonestyHumility | 0.10 | 0.10 | (-0.10, 0.29) | 0.98 | 0.325 |
| Gender (Female) × RelevanceRelevant × HEXACO18 HonestyHumility | 0.03 | 0.05 | (-0.07, 0.14) | 0.61 | 0.539 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 HonestyHumility | 0.17 | 0.11 | (-0.03, 0.38) | 1.63 | 0.102 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 HonestyHumility | 0.02 | 0.05 | (-0.09, 0.12) | 0.28 | 0.777 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | 6.39e-03 | 0.02 | (-0.03, 0.04) | 0.35 | 0.723 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | -5.61e-03 | 0.01 | (-0.03, 0.02) | -0.40 | 0.688 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | -0.02 | 0.05 | (-0.12, 0.08) | -0.35 | 0.725 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | -4.32e-04 | 0.01 | (-0.02, 0.02) | -0.04 | 0.971 |
Openness to Experience significantly moderated females’ ratings of irrelevant images labeled AI-generated vs. photographs, such that higher Openness was associated with lower attractiveness ratings for AI-labeled images (see the model and figure below).
Code
m2 <- glmmTMB::glmmTMB(Attractiveness ~ Gender / Relevance / (Condition * HEXACO18_Openness) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m2, filter = "HEXACO18_Openness")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Openness | -0.07 | 0.10 | (-0.25, 0.12) | -0.70 | 0.485 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Openness | -0.05 | 0.06 | (-0.16, 0.07) | -0.80 | 0.426 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Openness | 0.09 | 0.10 | (-0.10, 0.28) | 0.90 | 0.369 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Openness | -0.02 | 0.06 | (-0.13, 0.09) | -0.38 | 0.702 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Openness | 0.03 | 0.02 | (-6.87e-03, 0.06) | 1.56 | 0.118 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Openness | -0.02 | 0.01 | (-0.04, 0.01) | -1.07 | 0.287 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Openness | -0.02 | 0.03 | (-0.09, 0.05) | -0.64 | 0.521 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Openness | -0.03 | 0.01 | (-0.06, -8.57e-03) | -2.64 | 0.008 |
Code
predao <- estimate_relation(m2, length = 50)
p1 <- predao |>
  filter(Gender == "Female") |>
  ggplot(aes(x = HEXACO18_Openness, y = Predicted)) +
  geom_ribbon(aes(ymin = CI_low, ymax = CI_high, fill = Condition), alpha = 0.3) +
  geom_line(aes(color = Condition), linewidth = 1, key_glyph = draw_key_rect) +
  # geom_text(data=stars1, aes(x=x, y=y, label=label, color=Condition), hjust=0.5, size=3) +
  geom_text(data = data.frame(Relevance = as.factor("Irrelevant"), x = 4, y = 0.3156, label = "**"),
            aes(x = x, y = y, label = label), hjust = 0.5, color = "#424242", size = 6) +
  facet_grid(~Relevance) +
  scale_y_continuous(labels = scales::percent) +
  scale_color_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336")) +
  scale_fill_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336"), guide = "none") +
  theme_minimal() +
  theme(axis.text.y = element_text(size = 8),
        strip.placement = "outside",
        strip.background.x = element_rect(fill = "lightgrey", color = NA),
        strip.text.x = element_text(size = 10),
        strip.text.y = element_text(size = 10),
        axis.text.x = element_text(size = 9, color = "black"),
        legend.text = element_text(size = 10)) +
  labs(y = "Attractiveness\n", fill = "Images presented as:", color = "Images presented as:",
       x = "\nOpenness to Experience",
       caption = "Note. Females Only")
Code
m3 <- glmmTMB::glmmTMB(Attractiveness ~ Gender / Relevance / (Condition * HEXACO18_Emotionality) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m3, filter = "HEXACO18_Emotionality")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Emotionality | -0.05 | 0.11 | (-0.26, 0.17) | -0.42 | 0.676 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Emotionality | -0.03 | 0.06 | (-0.15, 0.09) | -0.55 | 0.580 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Emotionality | -0.14 | 0.12 | (-0.37, 0.09) | -1.19 | 0.235 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Emotionality | -0.03 | 0.06 | (-0.15, 0.09) | -0.45 | 0.654 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Emotionality | 0.02 | 0.02 | (-0.02, 0.06) | 0.85 | 0.398 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Emotionality | 0.01 | 0.01 | (-0.02, 0.04) | 0.76 | 0.450 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Emotionality | 0.03 | 0.05 | (-0.07, 0.13) | 0.57 | 0.571 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Emotionality | -4.19e-03 | 0.01 | (-0.03, 0.02) | -0.32 | 0.748 |
Code
m4 <- glmmTMB::glmmTMB(Attractiveness ~ Gender / Relevance / (Condition * HEXACO18_Extraversion) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m4, filter = "HEXACO18_Extraversion")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Extraversion | 0.04 | 0.08 | (-0.12, 0.20) | 0.50 | 0.620 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Extraversion | -0.01 | 0.05 | (-0.11, 0.09) | -0.20 | 0.844 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Extraversion | 0.15 | 0.08 | (-0.01, 0.32) | 1.83 | 0.067 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Extraversion | -0.05 | 0.05 | (-0.15, 0.04) | -1.08 | 0.280 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Extraversion | -0.02 | 0.02 | (-0.05, 0.02) | -1.02 | 0.307 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Extraversion | -0.01 | 0.01 | (-0.04, 0.01) | -0.95 | 0.342 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Extraversion | 0.01 | 0.03 | (-0.06, 0.08) | 0.30 | 0.761 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Extraversion | 1.98e-03 | 0.01 | (-0.02, 0.02) | 0.19 | 0.846 |
Code
m5 <- glmmTMB::glmmTMB(Attractiveness ~ Gender / Relevance / (Condition * HEXACO18_Agreeableness) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m5, filter = "HEXACO18_Agreeableness")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Agreeableness | 0.13 | 0.13 | (-0.12, 0.38) | 1.02 | 0.309 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Agreeableness | 0.11 | 0.06 | (-0.01, 0.23) | 1.73 | 0.083 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Agreeableness | -2.69e-03 | 0.14 | (-0.27, 0.27) | -0.02 | 0.984 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Agreeableness | 0.12 | 0.06 | (-9.27e-04, 0.24) | 1.94 | 0.052 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Agreeableness | -3.91e-03 | 0.02 | (-0.05, 0.04) | -0.16 | 0.873 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Agreeableness | 2.40e-03 | 0.02 | (-0.03, 0.03) | 0.15 | 0.883 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Agreeableness | -0.04 | 0.07 | (-0.17, 0.09) | -0.64 | 0.525 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Agreeableness | -0.02 | 0.01 | (-0.05, 2.54e-03) | -1.77 | 0.077 |
Code
m6 <- glmmTMB::glmmTMB(Attractiveness ~ Gender / Relevance / (Condition * HEXACO18_Conscientiousness) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m6, filter = "HEXACO18_Conscientiousness")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Conscientiousness | 0.11 | 0.09 | (-0.07, 0.29) | 1.18 | 0.236 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Conscientiousness | 0.02 | 0.05 | (-0.08, 0.11) | 0.38 | 0.704 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Conscientiousness | -0.01 | 0.09 | (-0.19, 0.17) | -0.11 | 0.914 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Conscientiousness | 0.03 | 0.05 | (-0.06, 0.12) | 0.68 | 0.494 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | -1.76e-03 | 0.02 | (-0.04, 0.03) | -0.10 | 0.922 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | 0.02 | 0.01 | (-8.26e-03, 0.04) | 1.32 | 0.188 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | -0.04 | 0.04 | (-0.11, 0.03) | -1.23 | 0.218 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | -0.01 | 0.01 | (-0.03, 7.62e-03) | -1.21 | 0.226 |
Beauty
Code
m1 <- glmmTMB::glmmTMB(Beauty ~ Gender / Relevance / (Condition * HEXACO18_HonestyHumility) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m1, filter = "HEXACO18_HonestyHumility")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 HonestyHumility | 0.17 | 0.07 | (0.03, 0.31) | 2.33 | 0.020 |
| Gender (Female) × RelevanceRelevant × HEXACO18 HonestyHumility | 0.10 | 0.04 | (0.02, 0.18) | 2.47 | 0.014 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 HonestyHumility | 0.25 | 0.08 | (0.10, 0.40) | 3.20 | 0.001 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 HonestyHumility | 0.10 | 0.04 | (0.02, 0.17) | 2.44 | 0.015 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | 0.02 | 0.02 | (-0.01, 0.05) | 1.14 | 0.255 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | 0.02 | 0.01 | (-1.52e-04, 0.05) | 1.95 | 0.051 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | -0.03 | 0.04 | (-0.10, 0.05) | -0.71 | 0.478 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | -6.86e-03 | 0.01 | (-0.03, 0.01) | -0.68 | 0.498 |
Code
m2 <- glmmTMB::glmmTMB(Beauty ~ Gender / Relevance / (Condition * HEXACO18_Openness) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m2, filter = "HEXACO18_Openness")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Openness | -0.07 | 0.07 | (-0.22, 0.07) | -1.02 | 0.307 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Openness | 0.04 | 0.04 | (-0.04, 0.12) | 0.95 | 0.342 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Openness | 0.05 | 0.07 | (-0.10, 0.20) | 0.67 | 0.500 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Openness | 0.05 | 0.04 | (-0.04, 0.13) | 1.10 | 0.272 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Openness | 0.02 | 0.02 | (-6.15e-03, 0.06) | 1.57 | 0.117 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Openness | 0.01 | 0.01 | (-0.01, 0.04) | 1.08 | 0.279 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Openness | 0.04 | 0.03 | (-0.02, 0.09) | 1.26 | 0.208 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Openness | -0.02 | 0.01 | (-0.04, 3.85e-03) | -1.60 | 0.109 |
Code
m3 <- glmmTMB::glmmTMB(Beauty ~ Gender / Relevance / (Condition * HEXACO18_Emotionality) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m3, filter = "HEXACO18_Emotionality")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Emotionality | -0.12 | 0.08 | (-0.29, 0.04) | -1.48 | 0.140 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Emotionality | -3.28e-03 | 0.05 | (-0.09, 0.09) | -0.07 | 0.943 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Emotionality | -0.16 | 0.09 | (-0.33, 0.01) | -1.83 | 0.067 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Emotionality | 0.03 | 0.05 | (-0.06, 0.12) | 0.65 | 0.517 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Emotionality | 1.00e-03 | 0.02 | (-0.04, 0.04) | 0.05 | 0.957 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Emotionality | 7.10e-03 | 0.01 | (-0.02, 0.03) | 0.55 | 0.583 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Emotionality | -0.03 | 0.04 | (-0.09, 0.04) | -0.73 | 0.463 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Emotionality | -7.66e-03 | 0.01 | (-0.03, 0.02) | -0.65 | 0.513 |
Code
m4 <- glmmTMB::glmmTMB(Beauty ~ Gender / Relevance / (Condition * HEXACO18_Extraversion) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m4, filter = "HEXACO18_Extraversion")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Extraversion | 0.02 | 0.06 | (-0.10, 0.14) | 0.26 | 0.792 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Extraversion | -0.02 | 0.04 | (-0.10, 0.06) | -0.51 | 0.609 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Extraversion | 0.03 | 0.06 | (-0.10, 0.15) | 0.43 | 0.664 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Extraversion | -0.07 | 0.04 | (-0.15, 2.90e-03) | -1.88 | 0.060 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Extraversion | -9.13e-03 | 0.02 | (-0.04, 0.02) | -0.59 | 0.555 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Extraversion | -1.63e-03 | 0.01 | (-0.02, 0.02) | -0.14 | 0.890 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Extraversion | 0.03 | 0.03 | (-0.02, 0.08) | 1.32 | 0.188 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Extraversion | 0.01 | 9.11e-03 | (-5.35e-03, 0.03) | 1.37 | 0.170 |
Code
m5 <- glmmTMB::glmmTMB(Beauty ~ Gender / Relevance / (Condition * HEXACO18_Agreeableness) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m5, filter = "HEXACO18_Agreeableness")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Agreeableness | 0.12 | 0.10 | (-0.07, 0.31) | 1.22 | 0.224 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Agreeableness | 0.07 | 0.05 | (-0.03, 0.16) | 1.41 | 0.158 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Agreeableness | 0.15 | 0.10 | (-0.05, 0.35) | 1.50 | 0.133 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Agreeableness | 0.08 | 0.05 | (-9.69e-03, 0.17) | 1.75 | 0.080 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Agreeableness | -0.02 | 0.02 | (-0.06, 0.02) | -0.86 | 0.390 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Agreeableness | -0.01 | 0.01 | (-0.04, 0.02) | -0.75 | 0.452 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Agreeableness | -0.05 | 0.04 | (-0.13, 0.03) | -1.18 | 0.237 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Agreeableness | -0.01 | 0.01 | (-0.04, 0.01) | -1.08 | 0.281 |
Conscientiousness significantly moderated females’ ratings of relevant images labeled AI-generated vs. photographs, such that higher Conscientiousness was associated with more positive beauty ratings.
Additionally, Conscientiousness was significantly associated with males’ beauty ratings of relevant images presented as photographs.
Code
m6 <- glmmTMB::glmmTMB(Beauty ~ Gender / Relevance / (Condition * HEXACO18_Conscientiousness) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m6, filter = "HEXACO18_Conscientiousness")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Conscientiousness | 0.18 | 0.07 | (0.04, 0.31) | 2.56 | 0.011 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Conscientiousness | -0.03 | 0.04 | (-0.10, 0.04) | -0.94 | 0.346 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Conscientiousness | 0.13 | 0.07 | (-8.40e-03, 0.27) | 1.84 | 0.066 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Conscientiousness | -0.04 | 0.04 | (-0.10, 0.03) | -1.00 | 0.316 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | 4.28e-03 | 0.02 | (-0.03, 0.04) | 0.26 | 0.793 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | 0.02 | 0.01 | (2.76e-03, 0.05) | 2.20 | 0.028 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | -2.81e-03 | 0.03 | (-0.06, 0.05) | -0.10 | 0.920 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | -7.55e-03 | 9.11e-03 | (-0.03, 0.01) | -0.83 | 0.407 |
Code
predbc <- estimate_relation(m6, length = 50)
p2 <- predbc |>
  filter(Gender == "Female") |>
  ggplot(aes(x = HEXACO18_Conscientiousness, y = Predicted)) +
  geom_ribbon(aes(ymin = CI_low, ymax = CI_high, fill = Condition), alpha = 0.3) +
  geom_line(aes(color = Condition), linewidth = 1, key_glyph = draw_key_rect) +
  # geom_text(data=stars1, aes(x=x, y=y, label=label, color=Condition), hjust=0.5, size=3) +
  geom_text(data = data.frame(Relevance = as.factor("Relevant"), x = 3, y = 0.493, label = "*"),
            aes(x = x, y = y, label = label), hjust = 0.5, color = "#424242", size = 6) +
  facet_grid(~Relevance) +
  scale_y_continuous(labels = scales::percent) +
  scale_color_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336")) +
  scale_fill_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336"), guide = "none") +
  theme_minimal() +
  theme(axis.text.y = element_text(size = 8),
        strip.placement = "outside",
        strip.background.x = element_rect(fill = "lightgrey", color = NA),
        strip.text.x = element_text(size = 10),
        strip.text.y = element_text(size = 10),
        axis.text.x = element_text(size = 9, color = "black"),
        legend.text = element_text(size = 10)) +
  labs(y = "Beauty", fill = "Images presented as:", color = "Images presented as:",
       x = "\nConscientiousness")
Trustworthiness
Honesty-Humility significantly moderated trustworthiness ratings of both relevant and irrelevant images in both males and females, with higher Honesty-Humility associated with higher trustworthiness ratings. The one exception involved females rating irrelevant images labeled AI-generated, for whom higher Honesty-Humility was associated with lower trustworthiness.
Code
m1 <- glmmTMB::glmmTMB(Trustworthiness ~ Gender / Relevance / (Condition * HEXACO18_HonestyHumility) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m1, filter = "HEXACO18_HonestyHumility")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 HonestyHumility | 0.16 | 0.08 | (3.72e-03, 0.31) | 2.01 | 0.045 |
| Gender (Female) × RelevanceRelevant × HEXACO18 HonestyHumility | 0.13 | 0.04 | (0.05, 0.21) | 3.01 | 0.003 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 HonestyHumility | 0.19 | 0.08 | (0.03, 0.35) | 2.34 | 0.019 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 HonestyHumility | 0.10 | 0.04 | (0.02, 0.18) | 2.39 | 0.017 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | 0.01 | 0.02 | (-0.02, 0.04) | 0.67 | 0.504 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | 1.54e-03 | 0.01 | (-0.02, 0.03) | 0.12 | 0.903 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | 6.90e-03 | 0.04 | (-0.07, 0.08) | 0.19 | 0.853 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 HonestyHumility | -0.02 | 0.01 | (-0.04, -1.70e-03) | -2.12 | 0.034 |
Code
predthh <- estimate_relation(m1, length = 50)
p3 <- predthh |>
  filter(Gender == "Female") |>
  ggplot(aes(x = HEXACO18_HonestyHumility, y = Predicted)) +
  geom_ribbon(aes(ymin = CI_low, ymax = CI_high, fill = Condition), alpha = 0.3) +
  geom_line(aes(color = Condition), linewidth = 1, key_glyph = draw_key_rect) +
  # geom_text(data=stars1, aes(x=x, y=y, label=label, color=Condition), hjust=0.5, size=3) +
  geom_text(data = data.frame(Relevance = as.factor("Irrelevant"), x = 4, y = 0.485, label = "*"),
            aes(x = x, y = y, label = label), hjust = 0.5, color = "#424242", size = 6) +
  facet_grid(~Relevance) +
  scale_y_continuous(labels = scales::percent) +
  scale_color_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336")) +
  scale_fill_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336"), guide = "none") +
  theme_minimal() +
  theme(axis.text.y = element_text(size = 8),
        strip.placement = "outside",
        strip.background.x = element_rect(fill = "lightgrey", color = NA),
        strip.text.x = element_text(size = 10),
        strip.text.y = element_text(size = 10),
        axis.text.x = element_text(size = 9, color = "black"),
        legend.text = element_text(size = 10)) +
  labs(y = "Trustworthiness\n", fill = "Images presented as:", color = "Images presented as:",
       x = "\nHonesty-Humility",
       caption = "Note. Females Only")
Openness significantly moderated females’ ratings of irrelevant AI-generated images, such that higher Openness was associated with lower trustworthiness ratings. No other effects were significant.
Code
m2 <- glmmTMB::glmmTMB(Trustworthiness ~ Gender / Relevance / (Condition * HEXACO18_Openness) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m2, filter = "HEXACO18_Openness")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Openness | -0.05 | 0.08 | (-0.21, 0.10) | -0.69 | 0.487 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Openness | -0.02 | 0.05 | (-0.11, 0.07) | -0.45 | 0.650 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Openness | 5.17e-03 | 0.08 | (-0.15, 0.16) | 0.06 | 0.948 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Openness | -0.01 | 0.05 | (-0.10, 0.08) | -0.23 | 0.817 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Openness | 0.01 | 0.02 | (-0.02, 0.04) | 0.75 | 0.455 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Openness | -9.65e-03 | 0.01 | (-0.03, 0.02) | -0.75 | 0.451 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Openness | -0.04 | 0.03 | (-0.10, 0.02) | -1.37 | 0.169 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Openness | -0.02 | 0.01 | (-0.05, -3.16e-05) | -1.96 | 0.050 |
Code
predto <- estimate_relation(m2, length = 50)
p4 <- predto |>
  filter(Gender == "Female") |>
  ggplot(aes(x = HEXACO18_Openness, y = Predicted)) +
  geom_ribbon(aes(ymin = CI_low, ymax = CI_high, fill = Condition), alpha = 0.3) +
  geom_line(aes(color = Condition), linewidth = 1, key_glyph = draw_key_rect) +
  # geom_text(data=stars1, aes(x=x, y=y, label=label, color=Condition), hjust=0.5, size=3) +
  geom_text(data = data.frame(Relevance = as.factor("Irrelevant"), x = 4, y = 0.485, label = "*"),
            aes(x = x, y = y, label = label), hjust = 0.5, color = "#424242", size = 6) +
  facet_grid(~Relevance) +
  scale_y_continuous(labels = scales::percent) +
  scale_color_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336")) +
  scale_fill_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336"), guide = "none") +
  theme_minimal() +
  theme(axis.text.y = element_text(size = 8),
        strip.placement = "outside",
        strip.background.x = element_rect(fill = "lightgrey", color = NA),
        strip.text.x = element_text(size = 10),
        strip.text.y = element_text(size = 10),
        axis.text.x = element_text(size = 9, color = "black"),
        legend.text = element_text(size = 10)) +
  labs(y = "Trustworthiness\n", fill = "Images presented as:", color = "Images presented as:",
       x = "\nOpenness to Experience",
       caption = "Note. Females Only")
Emotionality significantly moderated females’ trustworthiness ratings of relevant AI-generated images, where higher Emotionality was associated with lower trustworthiness. No other significant effects were found.
Code
m3 <- glmmTMB::glmmTMB(Trustworthiness ~ Gender / Relevance / (Condition * HEXACO18_Emotionality) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m3, filter = "HEXACO18_Emotionality")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Emotionality | -0.03 | 0.09 | (-0.20, 0.15) | -0.29 | 0.771 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Emotionality | 6.78e-03 | 0.05 | (-0.09, 0.10) | 0.14 | 0.891 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Emotionality | -0.07 | 0.09 | (-0.25, 0.11) | -0.75 | 0.452 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Emotionality | 0.03 | 0.05 | (-0.07, 0.12) | 0.54 | 0.587 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Emotionality | 3.20e-03 | 0.02 | (-0.03, 0.04) | 0.17 | 0.862 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Emotionality | -0.03 | 0.01 | (-0.06, -3.37e-03) | -2.21 | 0.027 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Emotionality | 6.85e-03 | 0.04 | (-0.06, 0.08) | 0.19 | 0.849 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Emotionality | -0.01 | 0.01 | (-0.03, 0.01) | -0.86 | 0.392 |
Code
predte <- estimate_relation(m3, length = 50)
p5 <- predte |>
  filter(Gender == "Female") |>
  ggplot(aes(x = HEXACO18_Emotionality, y = Predicted)) +
  geom_ribbon(aes(ymin = CI_low, ymax = CI_high, fill = Condition), alpha = 0.3) +
  geom_line(aes(color = Condition), linewidth = 1, key_glyph = draw_key_rect) +
  # geom_text(data=stars1, aes(x=x, y=y, label=label, color=Condition), hjust=0.5, size=3) +
  geom_text(data = data.frame(Relevance = as.factor("Relevant"), x = 4.25, y = 0.485, label = "*"),
            aes(x = x, y = y, label = label), hjust = 0.5, color = "#424242", size = 6) +
  facet_grid(~Relevance) +
  scale_y_continuous(labels = scales::percent) +
  scale_color_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336")) +
  scale_fill_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336"), guide = "none") +
  theme_minimal() +
  theme(axis.text.y = element_text(size = 8),
        strip.placement = "outside",
        strip.background.x = element_rect(fill = "lightgrey", color = NA),
        strip.text.x = element_text(size = 10),
        strip.text.y = element_text(size = 10),
        axis.text.x = element_text(size = 9, color = "black"),
        legend.text = element_text(size = 10)) +
  labs(y = "Trustworthiness\n", fill = "Images presented as:", color = "Images presented as:",
       x = "\nEmotionality",
       caption = "Note. Females Only")
Extraversion significantly moderated males’ ratings of irrelevant images, such that higher Extraversion was associated with higher trustworthiness ratings. No other effects were significant.
Code
m4 <- glmmTMB::glmmTMB(Trustworthiness ~ Gender / Relevance / (Condition * HEXACO18_Extraversion) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m4, filter = "HEXACO18_Extraversion")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Extraversion | 0.10 | 0.07 | (-0.03, 0.23) | 1.54 | 0.123 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Extraversion | -3.60e-03 | 0.04 | (-0.08, 0.08) | -0.09 | 0.930 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Extraversion | 0.14 | 0.07 | (4.66e-03, 0.27) | 2.03 | 0.042 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Extraversion | -0.06 | 0.04 | (-0.14, 0.02) | -1.50 | 0.135 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Extraversion | -0.01 | 0.01 | (-0.04, 0.02) | -0.69 | 0.493 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Extraversion | -1.97e-03 | 0.01 | (-0.03, 0.02) | -0.16 | 0.875 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Extraversion | -0.01 | 0.03 | (-0.06, 0.04) | -0.45 | 0.652 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Extraversion | 4.62e-03 | 9.67e-03 | (-0.01, 0.02) | 0.48 | 0.633 |
Agreeableness significantly moderated females’ ratings of relevant images, with higher Agreeableness being associated with higher trustworthiness ratings. Other interactions were non-significant.
Code
m5 <- glmmTMB::glmmTMB(Trustworthiness ~ Gender / Relevance / (Condition * HEXACO18_Agreeableness) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m5, filter = "HEXACO18_Agreeableness")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Agreeableness | 8.31e-03 | 0.10 | (-0.20, 0.21) | 0.08 | 0.937 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Agreeableness | 0.11 | 0.05 | (0.01, 0.21) | 2.24 | 0.025 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Agreeableness | 0.03 | 0.11 | (-0.18, 0.24) | 0.30 | 0.762 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Agreeableness | 0.09 | 0.05 | (-0.01, 0.18) | 1.71 | 0.088 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Agreeableness | -0.02 | 0.02 | (-0.06, 0.03) | -0.72 | 0.475 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Agreeableness | 2.11e-03 | 0.01 | (-0.03, 0.03) | 0.15 | 0.884 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Agreeableness | -9.74e-04 | 0.04 | (-0.09, 0.08) | -0.02 | 0.982 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Agreeableness | -3.43e-03 | 0.01 | (-0.03, 0.02) | -0.26 | 0.793 |
Conscientiousness significantly moderated females’ trustworthiness ratings of AI-generated images. Specifically, higher Conscientiousness was associated with higher trustworthiness ratings for relevant AI-generated images and lower trustworthiness ratings for irrelevant AI-generated images.
Code
m6 <- glmmTMB::glmmTMB(Trustworthiness ~ Gender / Relevance / (Condition * HEXACO18_Conscientiousness) +
                         (1 | Participant) + (1 | Stimulus),
                       data = dftask,
                       family = glmmTMB::ordbeta(),
                       control = glmmTMB::glmmTMBControl(parallel = 8))
results_table(m6, filter = "HEXACO18_Conscientiousness")
Fixed Effects

| Parameter | Coefficient | SE | 95% CI | z | p |
|---|---|---|---|---|---|
| Gender (Male) × RelevanceRelevant × HEXACO18 Conscientiousness | 0.11 | 0.07 | (-0.03, 0.26) | 1.55 | 0.122 |
| Gender (Female) × RelevanceRelevant × HEXACO18 Conscientiousness | -0.06 | 0.04 | (-0.14, 0.01) | -1.61 | 0.107 |
| Gender (Male) × RelevanceIrrelevant × HEXACO18 Conscientiousness | 0.13 | 0.08 | (-0.01, 0.28) | 1.77 | 0.076 |
| Gender (Female) × RelevanceIrrelevant × HEXACO18 Conscientiousness | -0.06 | 0.04 | (-0.13, 0.02) | -1.45 | 0.146 |
| Gender (Male) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | 0.02 | 0.02 | (-0.01, 0.05) | 1.28 | 0.200 |
| Gender (Female) × RelevanceRelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | 0.04 | 0.01 | (0.01, 0.06) | 3.19 | 0.001 |
| Gender (Male) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | -0.01 | 0.03 | (-0.07, 0.04) | -0.51 | 0.609 |
| Gender (Female) × RelevanceIrrelevant × ConditionAI-Generated × HEXACO18 Conscientiousness | -0.02 | 9.65e-03 | (-0.04, -2.28e-03) | -2.20 | 0.028 |
Code
predtc <- estimate_relation(m6, length = 50)
p6 <- predtc |>
  filter(Gender == "Female") |>
  ggplot(aes(x = HEXACO18_Conscientiousness, y = Predicted)) +
  geom_ribbon(aes(ymin = CI_low, ymax = CI_high, fill = Condition), alpha = 0.3) +
  geom_line(aes(color = Condition), linewidth = 1, key_glyph = draw_key_rect) +
  # geom_text(data=stars1, aes(x=x, y=y, label=label, color=Condition), hjust=0.5, size=3) +
  geom_text(data = data.frame(Relevance = as.factor("Relevant"), x = 3, y = 0.485, label = "*"),
            aes(x = x, y = y, label = label), hjust = 0.5, color = "#424242", size = 6) +
  geom_text(data = data.frame(Relevance = as.factor("Irrelevant"), x = 3.2, y = 0.485, label = "*"),
            aes(x = x, y = y, label = label), hjust = 0.5, color = "#424242", size = 6) +
  facet_grid(~Relevance) +
  scale_y_continuous(labels = scales::percent) +
  scale_color_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336")) +
  scale_fill_manual(values = c("AI-Generated" = "#2196F3", "Photograph" = "#F44336"), guide = "none") +
  theme_minimal() +
  theme(axis.text.y = element_text(size = 8),
        strip.placement = "outside",
        strip.background.x = element_rect(fill = "lightgrey", color = NA),
        strip.text.x = element_text(size = 10),
        strip.text.y = element_text(size = 10),
        axis.text.x = element_text(size = 9, color = "black"),
        legend.text = element_text(size = 10)) +
  labs(y = "Trustworthiness\n", fill = "Images presented as:", color = "Images presented as:",
       x = "\nConscientiousness",
       caption = "Note. Females Only")
Reality
Please refer to this analysis on the reality determinants for this study. Instead of using a traditional ZOIB model to examine whether realness scores were predicted by Gender and Condition, this analysis employed a novel modelling approach designed to better capture subjective scale responses. Specifically, the new model accounts for both the binary aspect of the decision (e.g., True/False, Agree/Disagree) and the intensity of that choice, quantified by how far the cursor was moved from the scale’s midpoint. This allows for a more nuanced interpretation of responses that may reflect an underlying discrete choice process.
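A minimal sketch of that decomposition (Choice mirrors the RealnessBelief variable created during data preparation; Confidence is a hypothetical helper name, not a variable used elsewhere in this document):
Code
dftask <- dftask |>
  mutate(Choice = as.integer(Realness > 0.5),   # which side of the scale midpoint
         Confidence = abs(Realness - 0.5) * 2)  # 0 = at midpoint, 1 = scale extreme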
Notes
The attractiveness, beauty, and trustworthiness models with both Relevance and Condition as random slopes for stimuli were singular.
The beauty and trustworthiness models with either Relevance or Condition alone as a random slope for stimuli were also singular.
The D-vour score was higher for the models with Relevance as a random slope on Stimulus than for the models with Condition, suggesting that Relevance captures more reliable variability across stimuli. Therefore, Relevance was retained as the random slope for Stimulus (see the explanation of D-vour).
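A sketch of that comparison, illustrated on the attractiveness outcome (both fits must converge without singularity for the D-vour scores to be meaningful):
Code
m_slope_rel <- glmmTMB::glmmTMB(Attractiveness ~ Gender / Relevance / Condition +
                                  (Condition | Participant) + (Relevance | Stimulus),
                                data = dftask, family = glmmTMB::ordbeta())
m_slope_cond <- glmmTMB::glmmTMB(Attractiveness ~ Gender / Relevance / Condition +
                                   (Condition | Participant) + (Condition | Stimulus),
                                 data = dftask, family = glmmTMB::ordbeta())
performance::performance_dvour(m_slope_rel)   # higher D-vour = more reliable random variability
performance::performance_dvour(m_slope_cond)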