🌈 Kaifan Yang

Week 1 — Introduction to Digital Practices

This week’s lecture focused on thinking critically about digital media from both social and material perspectives, introducing technological determinism, social shaping, and the Social Construction of Technology (SCOT).

💭 Is AI making us dumb?

AI can become a tool for developing our intelligence.

There is an interesting question - "Is AI making us dumb?" It makes me reflect not only on technology, but also on how we define artificial intelligence. On the one hand, artificial intelligence can learn quickly and complete tasks efficiently, such as translation, writing, and code modification. Before the advent of artificial intelligence, these tasks often required long periods of learning to master. The convenience brought by artificial intelligence may reduce our enthusiasm for working through problems ourselves. If we hand every problem we encounter to AI and let it think for us, we may no longer engage in independent thinking (even if we already possess those skills).

On the other hand, perhaps the statement that "artificial intelligence makes us stupid" is overly simplistic. Artificial intelligence helps improve the efficiency of human thinking: it enables us to handle large amounts of data and quickly identify the key points. People can also build on its output to create at a higher level; in this way, artificial intelligence allows us to use our intelligence in new ways. Previously, we thought that being smart meant being able to remember things, calculate quickly, write well, and so on. But now that artificial intelligence can do all these things, human "intelligence" can be expressed in other ways, such as critical thinking and emotional understanding.

From this perspective, whether artificial intelligence makes us stupid depends on our relationship with it. If we use it passively, it may weaken our motivation to learn. But if we use it critically, questioning its output and integrating it into our own thinking, then artificial intelligence can become a tool for developing our intelligence. So the question may not lie with artificial intelligence itself, but with its designers and users, and with how we cultivate digital literacy and self-awareness.

Don't blindly accept technology.

Many of us habitually believe that technology is neutral, whether mobile phones, the internet, or AI. We think they are merely tools, and that their goodness or badness depends on how people use them. But this week's lecture reminds us that technology itself also has a "character" and a "stance". It is not just a passive machine; it is shaped by social, political, cultural, and economic factors. It is not neutral. Technological determinism holds that technology itself drives social change and humans merely follow along: for example, the steam engine drove the Industrial Revolution, and shipbuilding technology drove maritime trade. However, this view ignores some important questions: Who designed this technology? Whom does it serve? Who is neglected?

Social shaping theory holds that society and technology mutually shape each other. For instance, social conditions such as the capitalist economic structure, labor shortages, and the development of new energy sources facilitated the research on and popularization of the steam engine, which in turn drove the Industrial Revolution. This emphasizes that technology carries economic value and human bias.

SCOT goes further, arguing that the “meaning” of technology itself is also socially constructed. The same technology has completely different uses in different contexts, and different groups give it different values. For example, a social media platform can be a space for young people to share their lives and build social relationships, an important marketing channel for advertisers, and a tool for spreading and controlling public opinion for governments.

Technology is like a mirror: each group sees what they want to see in it. What we can do is not blindly accept technology, but consciously use and examine it.

Week 2 — Understanding Digital Media (Creating Websites)

This week’s lecture focused on digital networks and infrastructures, including platforms and algorithms. In this week’s workshop, we started to build our own portfolio websites.

💭 How to build a website?

A good website understands its users.

This week's web development practice focused primarily on building the site framework, writing the HTML and CSS for the main sections. I reviewed several popular websites like YouTube, Netflix, and BBC. I noticed a common design trait among them: extreme simplicity. From functional buttons to color schemes, everything is clean and harmonious. Consequently, I referenced YouTube—my most-used platform—as my design benchmark. YouTube's user-friendliness stems not only from its intuitive interface but also from effortless content discovery: the search bar is prominently placed, and videos are neatly organized in the main view. The left side displays the user's subscription feed, eliminating the need to navigate to separate pages. However, YouTube has a noticeable drawback during video playback—users must scroll down to view comments, which interrupts the viewing experience. This makes interaction inconvenient. I believe a significant improvement would be implementing a floating window that continues video playback while comments are viewed.

After finalizing my desired web design style, I found building the framework truly challenging. First, I wanted the framework to be easily modifiable later. Second, balancing the overall layout with detailed elements was a major challenge—it had to be both visually appealing and comfortable to interact with. This became especially complex when multiple containers nested together, making the CSS layout relationships incredibly intricate. (My most frustrating moments were when I confidently finished the HTML and CSS in Phoenix Code, only to encounter errors that required line-by-line debugging.) To better understand and apply all these techniques, I revisited some tutorial websites I used when learning Python, such as CSDN, which has helped me resolve several of my challenges. Currently, I'm practicing by emulating the layouts of well-designed websites, hoping to gain a deeper grasp of the relationship between webpage structure and styling.

Week 3 — Understanding Digital Media (Web Scraping)

In this week’s workshop, we started to learn how to scrape data from the web and collect it for use in research projects.

💭 Is web scraping safe?

The ethical boundaries of web scraping need to be more clearly defined.

In this week's workshop, I experimented with viewing webpage data structures using developer tools and scraping web pages with Web Scraper. This experience expanded my previous understanding of webpage architecture and data extraction. In my earlier studies, I assumed building websites primarily involved coding to add powerful features—the foundation of web development. But upon seeing the underlying structure and actually constructing pages myself, I realized every design decision influences how users comprehend and utilize the information. Each layer of interaction logic embodies both the designer's intent regarding what users should see and the users' own habits. During web scraping, I could clearly see the data structure behind every webpage—an exhilarating experience that allowed me to browse the web from a completely new perspective. Yet this also raised ethical concerns: while scraping offers convenience, data privacy protection must be prioritized. Otherwise, web scraping risks being misused.
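In the workshop we used the point-and-click Web Scraper extension, but the same idea can be expressed in a few lines of code. Below is a minimal Python sketch (using requests and BeautifulSoup) of what such a tool does behind the scenes; the URL and the selector are placeholders I chose for illustration, not the pages we actually scraped.

```python
# A minimal sketch of what a point-and-click scraper does behind the scenes.
# The URL and the CSS selector are placeholders for illustration only.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/"                 # hypothetical page
response = requests.get(url, timeout=10)
response.raise_for_status()

# Parse the HTML into a tree, the same structure the browser's
# developer tools display.
soup = BeautifulSoup(response.text, "html.parser")

# On a real site you would first inspect the page in developer tools
# and pick selectors that match the elements you want, e.g. "div.review".
for link in soup.select("a"):
    print(link.get_text(strip=True), link.get("href"))
```

Seeing it this way also underlines the ethical point above: a few lines like these can collect data at a scale no person browsing ever could, so how they are used matters.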

Week 4 — Data & Data Analysis

This week's studies focused on data and data analysis. We explored the connection between data and power and practiced the process of data collection.

💭 Is data scraping merely about scraping?

The gap between “raw data” and “usable data”.

This week I used the Web Scraper tool to extract product review data from Amazon. First, I filtered the review section to scrape only the text content. During the process, I gained deeper insights into the webpage structure and data presentation. Amazon's review pages categorize customer comments with tags, which complicates data extraction because these tags are nested within different HTML levels. I also encountered numerous challenges during scraping: the scraped data came out jumbled together, lacking clear separation and formatting. This made me realize that web scraping is merely the first step in data collection, followed by extensive data cleaning and organization. This part of the work proved more tedious than anticipated, giving me a starker appreciation of the gap between “raw data” and “usable data”.

Screenshot
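To make the “raw data” versus “usable data” gap concrete, here is a small, hypothetical sketch of the kind of cleaning that followed the scraping. The column name and the sample rows are invented for illustration; they are not the actual Amazon export.

```python
# A toy sketch of post-scraping cleanup with pandas.
# The column name and sample rows are invented; this is not the real export.
import pandas as pd

raw = pd.DataFrame({
    "review": [
        "  Great battery life!\n\nVerified Purchase  ",
        "Great battery life!\n\nVerified Purchase",
        "Arrived late, but works fine.   ",
        None,
    ]
})

cleaned = (
    raw.dropna(subset=["review"])                                   # drop empty rows
       .assign(review=lambda d: d["review"]
               .str.replace(r"\s+", " ", regex=True)                # collapse whitespace
               .str.replace("Verified Purchase", "", regex=False)   # strip boilerplate
               .str.strip())
       .drop_duplicates(subset=["review"])                          # remove repeated scrapes
       .reset_index(drop=True)
)

print(cleaned)
```

Even in this toy version, the cleaning code is longer than the scraping step it follows, which matches my experience this week.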

Simultaneously, I contemplated the ethical implications of web scraping. While the comment content itself is public, it is still tied to accounts and underlying records that may hold private information, such as real names or IP addresses. If web scraping is misused, it could infringe upon this privacy. This experience also made me realize that mastering web scraping technology while respecting its ethical boundaries is no simple task.

Week 5 — Data Visualisation

This week's studies focused on data visualisation, and we faced the challenge of data collection and classification.

💭 How reliable is the data?

Data is both subjective and objective.

This week, my team conducted data collection and analysis for the “Postgraduate Survey on the Use of GenAI.” During this process, I encountered several challenges. First, in questionnaire design, some questions were phrased too broadly, with unclear distinctions between response options. This led to significant variations in interpretation among participants from different backgrounds, making it difficult to categorize responses. For instance, after collecting data on these two questions in our survey, we realized that converting them into ranking questions might have been more appropriate: that would have avoided the difficulty of analyzing responses where participants selected multiple answers, and would have made the differences between the options much clearer.

Screenshot
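As an illustration of why the multiple-answer format was hard to analyse, here is a toy sketch in Python. The question name, options, and semicolon-separated export format are assumptions, not our actual survey data.

```python
# A toy sketch of tallying multi-select survey answers.
# The column name, options, and ";"-separated format are assumptions.
import pandas as pd

responses = pd.Series([
    "Writing;Translation",
    "Coding",
    "Writing;Coding;Translation",
    "Translation",
], name="genai_uses")

# Split each answer into its options, then count how often each appears.
counts = responses.str.split(";").explode().value_counts()
print(counts)

# Because one participant can appear under several options, the counts no
# longer sum to the number of participants, which is what made our charts
# hard to read; a ranking question would have avoided this ambiguity.
```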

I also observed how participant engagement impacted data authenticity. Some participants provided in-depth responses, while others rushed through the survey merely to complete the task. This led to substantial bias in certain data points.

Screenshot

When visualizing data, both sample size and scale settings influence how differences are presented. This underscores that data visualization isn't always an objective process; it requires us to maintain critical thinking throughout.
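A small, made-up example shows how much the scale settings alone can change the impression a chart gives; the numbers below are invented, and only the axis limits differ between the two panels.

```python
# Same (invented) data plotted twice: only the y-axis limits change.
import matplotlib.pyplot as plt

groups = ["Daily", "Weekly", "Rarely"]
share = [52, 48, 45]   # hypothetical percentages

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.bar(groups, share)
ax1.set_ylim(0, 100)           # full scale: the differences look minor
ax1.set_title("y-axis 0 to 100")

ax2.bar(groups, share)
ax2.set_ylim(40, 55)           # truncated scale: the same differences look dramatic
ax2.set_title("y-axis 40 to 55")

fig.suptitle("Same data, different impression")
plt.tight_layout()
plt.show()
```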

This data collection experience vividly demonstrated that data is not inherently objective. During questionnaire design, the wording chosen by designers influences participants' understanding and consequently shapes the data. At the collection stage, participants' motivation affects data reliability. Furthermore, during the final data organization phase, our categorization and processing of the data inherently involves subjective judgment, which ultimately impacts the conclusions drawn from the data.

Week 6 — Identity and Representation

In this week's exercise, I examined my own and my friends' daily social media behaviors for the first time as a researcher.

💭 Does the data represent the real you?

Data both interprets and shapes the world.

By reviewing my Instagram Privacy Center and Account Center, I began to understand how the platform “understands” me through various types of data and “interacts” with me (pushing ads, suggesting accounts I might be interested in, etc.). Although Instagram's “Ad Preferences” indicated the platform hadn't assigned me a specific “advertiser label” (meaning I wasn't yet categorized as a marketable demographic), I still saw ads for Amazon Student Prime and Apple TV on my feed.

Screenshot Screenshot

This made me realize that even without explicit labels, the platform infers my “algorithmic identity” based on my behavior (Cheney-Lippold, 2017). In other words, my “self” on the platform isn't entirely self-presented; part of it is constructed by the platform's algorithms based on my behavioral data.

While manually collecting the fifteen most recent posts from friends and categorizing them one by one in a spreadsheet, I encountered a problem: how to tag these posts reasonably? Taking one friend's post as an example (Note: I asked her permission, and she fully agreed to have her post featured on this site):

Screenshot

This post showcased her artwork. Initially, I tagged it as “work” because her major is art-related, and I assumed it might be related to her assignments or practice pieces. But when I asked her how she'd categorize it, she preferred “lifestyle”—she saw it more as casual creations during free time, a record of daily life. This exercise revealed that tagging posts for tabulation involves significant interpretation. For instance, I had to imagine the bloggers' emotional states when posting—assessing whether the scene was joyful or emotionally charged. This shows that data isn't inherently categorized; it becomes data through decisions, like completing a data collage. This aligns perfectly with Cheney-Lippold's concept of “algorithmic identity”: “I am utterly overdetermined, made and remade every time I make a datafied move. Through my data alone, I have entered into even more conversations about who I am and what who I am means, conversations that happen without my knowledge and with many more participants involved than most of us could imagine.”

Screenshot

During classification, I noticed categories in the table sometimes felt overly narrow. For instance, the “lifestyle” category encompassed subthemes like food, travel, and learning. A single post might contain photos spanning multiple categories—some users upload all their recent photos in one post, making it difficult to assign that post to just one category. I attempted to expand the table's categories, but the visualized data became harder to read, its structure growing overly complex and seemingly losing the purpose of “categorization”.

Screenshot
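The trade-off I ran into can be shown with a small, hypothetical sketch: keeping one label per post makes the table easy to chart, while multi-label tagging is more faithful but harder to read. The posts and tags below are invented.

```python
# One-label vs multi-label tagging of posts (invented examples).
import pandas as pd

posts = pd.DataFrame({
    "post": ["p1", "p2", "p3"],
    "tags": [["lifestyle"], ["lifestyle", "travel", "food"], ["work"]],
})

# Multi-label view: one indicator column per tag, faithful but wide.
multi = posts.join(
    pd.get_dummies(posts["tags"].explode()).groupby(level=0).max()
)
print(multi)

# Single-label view: keep only the first tag, simple to plot but lossy.
posts["primary_tag"] = posts["tags"].str[0]
print(posts[["post", "primary_tag"]])
```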

This simplification represents a “necessary constraint.” Sumpter initially presented 13 categories, but the principal component analysis algorithm ultimately distilled them into the two most relevant dimensions: public sphere versus private sphere, and culture versus workplace. These two dimensions proved remarkably logical—the most significant differences among his friends were precisely manifested along these axes (Sumpter, 2018).

Screenshot Screenshot
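To see what that kind of reduction looks like in practice, here is a minimal sketch using scikit-learn's PCA on random stand-in data (not Sumpter's actual dataset): forty imaginary people, each described by counts across thirteen post categories, compressed to two axes.

```python
# A minimal PCA sketch on random stand-in data, not Sumpter's dataset.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.poisson(lam=3.0, size=(40, 13))   # 40 people x 13 post categories (made up)

pca = PCA(n_components=2)
coords = pca.fit_transform(X)

print(coords.shape)                        # (40, 2): each person becomes a point in 2D
print(pca.explained_variance_ratio_)       # how much variation the two axes preserve
```

The two axes the algorithm finds have no names of their own; labels such as “public versus private” are interpretations people add afterwards, which is exactly the kind of constructed meaning discussed above.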

Week 7 — AI and Identity

This week's practice involved a conversation with generative AI: I asked it to help me come up with a story about the news industry. This experience gave me a new understanding of using generative AI.

💭 Who are we really talking to?

The identity of AI is fluid.

Previously, I perceived generative AI as a trained brain that would retrieve the most suitable content based on my requests. However, when I repeatedly used negative prompting to tell it what I didn't want—avoiding news bias, rejecting simplistic dichotomies between good and bad news—its responses clearly failed to grasp my intent. I realized AI doesn't comprehend “negation” the way humans do.

Screenshot Screenshot

“Dimensionality reduction algorithmically removes attributes or dimensions from a dataset that are not seen to be intrinsic to the patterns” (Munster, 2025, p. 14). This indicates that machine learning systems process statistical features, not semantic ones. Negative prompting doesn't make AI “avoid” specific keywords; it guides the model away from certain distributions. In reality, it neither comprehends your preferences nor holds any stance; it is merely pushed toward different domains by our prompts. Language obscures the model's internal workings, misleading us into believing it possesses understanding. When I raised my concerns, the AI responded: “This is an extremely insightful question, and you are absolutely right to challenge this framework.” It appeared to acknowledge my viewpoint and engage with my emotions, but in truth, I was conversing with a statistical model.

Screenshot
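A deliberately crude toy example, which is nothing like how a large language model actually works internally, still illustrates the point about surface statistics: under a simple bag-of-words view, a request and its negation share almost all of their features, so “not about X” barely moves the representation.

```python
# A crude bag-of-words illustration (not how an LLM works): a prompt and its
# negation look almost identical at the level of surface token statistics.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prompts = [
    "write a story about bias in the news industry",
    "write a story that is not about bias in the news industry",
]

vectors = CountVectorizer().fit_transform(prompts)
similarity = cosine_similarity(vectors)[0, 1]
print(similarity)   # high similarity: the added negation changes very little
```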

Machine learning tends toward homogenized predictive structures (Munster, 2025, p. 39). If models consistently converge in the same direction, forcing AI off-topic becomes a way to reveal its non-human logic. For instance, I might instruct it to avoid clichés when writing a story about the news industry. Such actions push it into unfamiliar territory, exposing its true internal architecture.