
Deno Sandbox launched for running AI-generated code


Deno Land, maker of the Deno runtime, has launched Deno Sandbox, a secure environment built for code generated by AI agents. The company also announced the long-awaited general availability of Deno Deploy, a serverless platform for running JavaScript and TypeScript applications. Both were announced on February 3.

Now in beta, Deno Sandbox provides lightweight Linux microVMs that run as protected environments in the Deno Deploy cloud. Deno Sandbox defends against prompt injection attacks, the company said, where a user or AI attempts to run malicious code. Secrets such as API keys never enter the sandbox and only appear when an outbound HTTP request is sent to a pre-approved host, according to the company.

Deno Sandbox was created in response to the rise of AI-driven development, explained Deno co-creator Ryan Dahl, as more LLM-generated code is being deployed with the ability to call external APIs using real credentials, without human review. In this scenario, he wrote, "Sandboxing the compute isn't enough. You must control network egress and protect secrets from exfiltration." Deno Sandbox provides both, according to Dahl. It specializes in workloads where code must be generated, evaluated, or safely executed on behalf of an untrusted user.

Apple's M5 Ultra secret may have been spilled


China joins race to develop space-based data centers with 5-year plan


It looks like China is getting in on the race to launch data centers into space.

The state-run China Global Television Network (CGTN) reported on Thursday (Jan. 29) that the main Chinese space company, the state-owned China Aerospace Science and Technology Corporation (CASC), will work on space-based data centers as part of a larger five-year plan to expand the country's already significant presence in space.

I Asked Claude to Replicate a PNAS Paper Using OpenAI's Batch API. Here's What Happened (Part 1)



I've been experimenting with Claude Code for months now, using it for everything from writing lecture slides to debugging R scripts to managing my chaotic academic life. But most of those tasks, if I'm being honest, are things I could do myself. They're just faster with Claude.

Last week I decided to try something different. I wanted to see if Claude could help me do something I genuinely didn't know how to do: pull in a replication package from a published paper, run NLP classification on 300,000 text records using OpenAI's Batch API, and compare the results to the original findings.

This is Part 1 of that story. Part 2 will have the results, but I'm writing this before the batch job finishes, so we're all in suspense together. Here's the video walkthrough. I'll post the second one once this is done and we'll check it together.

Thanks again for all your support! These Claude posts, and the substack more generally, are labors of love. Please consider becoming a paying subscriber! It's $5/month, $50/year, or the founding member price of $250! For which I can give you full and total awareness on your death bed in return.

The paper I chose was Card et al. (PNAS 2022): "Computational analysis of 140 years of US political speeches reveals more positive but increasingly polarized framing of immigration." Let me dive in and tell you about it. If you haven't read it, the headline findings are striking:

  1. Overall sentiment toward immigration is MORE positive today than a century ago. The shift occurred between WWII and the 1965 Immigration Act.

  2. But the parties have polarized dramatically. Democrats now use unprecedentedly positive language about immigrants. Republicans use language as negative as the average legislator during the 1920s quota era.

The authors classified ~200,000 congressional speeches and ~5,000 presidential communications using a RoBERTa model fine-tuned on human annotations. Each speech segment was labeled as PRO-IMMIGRATION, ANTI-IMMIGRATION, or NEUTRAL. But my question was: can we update this paper using a modern large language model to do the classification? And can we do it live, without me doing anything other than dictating the task to Claude Code?

If the answer to this question is yes, then it means researchers can use off-the-shelf LLMs for text classification at scale, cheaper and faster than training custom models, and for many of us, that's a great lesson to learn. But I think this exercise also doubles as a demonstration that even if you feel intimidated by such a task, you shouldn't be, because I basically do this whole thing by typing my instructions in and letting Claude Code do the entire thing, including finding the replication package and unzipping and extracting the speeches!

If no, we learn something about where human-annotated training data still matters. And maybe we also learn that this use of Claude Code to do everything via "dictation" is not all it's cracked up to be.

Let me be clear about what makes this difficult:

  1. Scale. We're talking about 304,995 speech segments. You can't just paste these into ChatGPT one at a time.

  2. Infrastructure. OpenAI's Batch API is the right tool for this: it's 50% cheaper than real-time API calls and can handle huge jobs. But setting it up requires understanding file formats, authentication, job submission, result parsing, and error handling.

  3. Methodology. Even once you get the API working, you have to think carefully about prompt design, label normalization, and how to compare your results to the original paper's.

  4. Coordination. The replication data lives on GitHub. The API key lives somewhere secure. The code needs to be modular and well documented. The results need to be interpretable.

I wanted to see if Claude Code could handle the whole pipeline, from downloading the data to submitting the batch job, while I watched and asked questions.

I started by telling Claude what I wanted to do using something called "plan mode". Plan mode is a button you pull down in the desktop app. I have a long back and forth with Claude Code about what I want done, he works it out, I review it, and then we're ready: I agree and he does it. If nothing else, watching the video and skipping to plan mode, you can see what I did.

So what I did was save the paper myself locally (as I had a feeling he might not be able to get through the PNAS download button to grab it, but who knows), then explain exactly what I wanted done. But I explained my request backwards. That is, I told him what I wanted at the very end, which was a classification of the speeches the authors had, but using OpenAI batch-requested classification with the gpt-4o-mini LLM. Then I worked backwards from there and said I wanted an explainer deck, I wanted an audit using referee2 before he ran it, I wanted a split PDF using the pdf-splitter skill from my repo, and so on. It's easier to explain if you watch it.

So once we agreed, and after some tweaking of things in plan mode, Claude immediately did something I appreciated: it created a self-contained project folder rather than scattering files across my existing course directory.

exercises/llm_replication/
├── article/
│   └── splits/           # PDF chunks + notes
├── code/
│   ├── 01_prepare_data.py
│   ├── 02_create_batch.py
│   ├── 03_submit_batch.py
│   ├── 04_download_results.py
│   └── 05_compare_results.py
├── data/
│   ├── raw/              # Downloaded replication data
│   ├── processed/        # Cleaned CSV
│   ├── batch_input/      # JSONL files for API
│   └── batch_output/     # Results
├── deck/
│   └── deck.md           # Presentation slides
├── plan.md
└── README.md

This structure made sense to me. Each script does one thing. The data flows from raw → processed → batch_input → batch_output → results. If something breaks, you know where to look. So this is more or less replicable, and I can use it to show my students next week when we review the paper and replicate it, more or less, using an LLM rather than the methodology the authors used.

The replication package from Card et al. is 1.39 GB. How do I know that? Because Claude Code searched and found it. He found it, pulled the zipped file into my local directory, and saw the size using whatever tool it is in the terminal that lets you check file size. Here's where he put it.

He then unzipped the file and placed it in that ./data directory. It includes the speech texts, the RoBERTa model predictions, and the original human annotations. So this is now the PNAS paper, from the ground up.

When Claude downloaded the data and explored the structure, here's what we found:

  • Congressional speeches: 290,800 segments in a .jsonlist file

  • Presidential communications: 14,195 segments in a separate file

  • Each record includes: text, date, speaker, party, chamber, and the original model's probability scores for each label

Interestingly, that's a little different from what the PNAS paper says, which reported roughly 200,000 congressional speeches and 5,000 presidential communications. This came out to 305,000. So I look forward to digging more into that.

I also have the original paper's classifier output probabilities for all three classes. If a speech has probabilities (anti=0.6, neutral=0.3, pro=0.1), we take the argmax: ANTI-IMMIGRATION. This was from their own analysis.
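To make the argmax step concrete, here is a minimal sketch of the idea; the column names like p_anti are hypothetical, not the replication package's actual field names:

```python
import pandas as pd

# Hypothetical column names for the three class probabilities.
probs = pd.DataFrame({"p_anti": [0.6], "p_neutral": [0.3], "p_pro": [0.1]})

label_map = {
    "p_anti": "ANTI-IMMIGRATION",
    "p_neutral": "NEUTRAL",
    "p_pro": "PRO-IMMIGRATION",
}

# The argmax across the probability columns recovers the original paper's label.
prob_cols = ["p_anti", "p_neutral", "p_pro"]
probs["original_label"] = probs[prob_cols].idxmax(axis=1).map(label_map)
print(probs["original_label"].iloc[0])  # ANTI-IMMIGRATION
```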

Claude wrote 01_prepare_data.py to load both files, extract the relevant fields, compute the argmax labels, and save everything to a clean CSV. Running it produced:

Total records: 304,995

--- Original Label Distribution ---
  ANTI-IMMIGRATION: 48,234 (15.8%)
  NEUTRAL: 171,847 (56.3%)
  PRO-IMMIGRATION: 84,914 (27.8%)

Most speeches are neutral, which makes sense. A lot of congressional speech is procedural.

Or are they neutral? That's what we're going to find out. When we have the LLM do the classification, we're going to see if maybe there is still more money on the table. Then we'll create a transition matrix to compare what the LLM classified as anti, neutral, or pro against what the original authors classified. We'll see if some things get shifted around.

This is where it gets interesting. How do you tell gpt-4o-mini to classify political speeches the same way a fine-tuned RoBERTa model did?

Claude's first draft was detailed, maybe too detailed:

You are a research assistant analyzing political speeches about immigration...

Classification categories:

1. PRO-IMMIGRATION
   - Valuing immigrants and their contributions
   - Favoring less restrictive immigration policies
   - Emphasizing humanitarian concerns, family unity, cultural contributions
   - Using positive frames like "hardworking," "contributions," "families"

2. ANTI-IMMIGRATION
   - Opposing immigration or favoring more restrictions
   - Emphasizing threats, crime, illegality, or economic competition
   - Using negative frames like "illegal," "criminals," "flood," "invasion"
   ...

I had a concern: by listing specific keywords, were we biasing the model toward pattern-matching rather than semantic understanding?

This is exactly the kind of methodological question that matters in research. If you tell the model "speeches with the word 'flood' are anti-immigration," you're not really testing whether it understands tone; you're testing whether it can grep.

We decided to keep the detailed prompt for now but flagged it as something to revisit. A simpler prompt might actually perform better for a replication study, where you want the LLM's unbiased judgment. What I think I'll do is a Part 3 where we resubmit with a new prompt that doesn't lead the LLM as much as I did, but I think it's still useful, even with the original prompting, to see whether this more advanced LLM, which has far more skill at discerning context than earlier models (even RoBERTa), comes to the same or different conclusions.

So now we get into the OpenAI part. I fully understand that this part is a mystery to many people. Just what exactly am I going to be doing in this fourth step? And that's where I think relying on Claude Code for help in answering your questions, as well as learning how to do it, and then using referee2 to audit the code, is going to be useful. But here's the gist.

To get the classification of the speeches done, we have to upload those speeches to OpenAI. But OpenAI's Batch API expects something called JSONL files, where each line is a complete API request. So, without me even explaining how to do it, Claude wrote 02_create_batch.py to generate these.
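For readers who haven't seen the format, here is a rough sketch of what producing one line of such a JSONL file looks like; the custom_id scheme, file name, and prompt text are illustrative, not the exact ones the script produced:

```python
import json

# One Batch API request per line; custom_id lets you match results back to speeches.
request = {
    "custom_id": "speech-000001",  # hypothetical ID scheme
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": "Classify the speech as PRO-IMMIGRATION, ANTI-IMMIGRATION, or NEUTRAL."},
            {"role": "user", "content": "Mr. Speaker, I rise today..."},  # truncated speech text
        ],
        "max_tokens": 10,
    },
}

# Append the request as a single JSON line to a batch input file.
with open("data/batch_input/batch_001.jsonl", "a") as f:
    f.write(json.dumps(request) + "\n")
```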

A few technical details that matter:

  • Chunking: We split the 304,995 records into 39 batch files of 8,000 records each. This keeps file sizes manageable.

  • Truncation: Some speeches are very long. We truncate at 3,000 characters to fit within context limits. Claude added logging to track how many records get truncated.

  • Cost estimation: Before creating anything, the script estimates the total cost:

--- Estimated Cost (gpt-4o-mini with Batch API) ---
  Input tokens:  140,373,889 (~$10.53)
  Output tokens: 1,524,975 (~$0.46)
  TOTAL ESTIMATED COST: $10.99
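The arithmetic behind that estimate is simple; the sketch below assumes gpt-4o-mini Batch API pricing of roughly $0.075 per million input tokens and $0.30 per million output tokens (half the real-time rates), which may change over time:

```python
# Rough cost check under the assumed batch pricing noted above.
input_tokens = 140_373_889
output_tokens = 1_524_975

input_cost = input_tokens / 1_000_000 * 0.075   # ~$10.53
output_cost = output_tokens / 1_000_000 * 0.30  # ~$0.46

print(f"Input:  ${input_cost:.2f}")
print(f"Output: ${output_cost:.2f}")
print(f"Total:  ${input_cost + output_cost:.2f}")  # ~$10.99
```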

Less than eleven dollars to classify 300,000 speeches! That's remarkable. A few years ago, this would have required training your own model or paying for expensive human annotation. But now, for $11 and what will take a mere 24 hours, I, a one-man show doing all of this within an hour over a video, got this submitted! Un. Real.

03_submit_batch.py is where money gets spent, so Claude built in several safety features:

  1. A --dry-run flag that shows what would be submitted without actually submitting

  2. An explicit confirmation prompt that requires typing "yes" before proceeding

  3. Retry logic with exponential backoff for handling API errors

  4. Tracking files that save batch IDs so you can check status later

I appreciated the defensive programming. When you're about to spend money on an API call, you want to be sure you're doing what you intend.
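A minimal sketch of that pattern, assuming the script submits files with the standard OpenAI Python client; the flag names come from the post, the rest is illustrative rather than the actual script:

```python
import argparse
import time

from openai import OpenAI

def submit_with_backoff(client: OpenAI, path: str, max_retries: int = 5):
    """Upload a JSONL file and create a batch, retrying with exponential backoff."""
    for attempt in range(max_retries):
        try:
            batch_file = client.files.create(file=open(path, "rb"), purpose="batch")
            return client.batches.create(
                input_file_id=batch_file.id,
                endpoint="/v1/chat/completions",
                completion_window="24h",
            )
        except Exception as exc:  # a real script would catch specific API errors
            wait = 2 ** attempt
            print(f"Error: {exc}; retrying in {wait}s")
            time.sleep(wait)
    raise RuntimeError(f"Failed to submit {path} after {max_retries} retries")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args()

    files = ["data/batch_input/batch_001.jsonl"]  # placeholder list of chunk files
    if args.dry_run:
        print(f"Would submit {len(files)} file(s)")  # show the plan, spend nothing
    elif input(f"Submit {len(files)} file(s)? Type 'yes' to proceed: ") == "yes":
        for path in files:
            batch = submit_with_backoff(OpenAI(), path)
            print(f"Submitted {path}: {batch.id}")  # batch IDs get saved for status checks
```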

Here's where things got meta.

I have a system I mentioned the other day called personas. And the only persona I have so far is an aggressive "auditor" called "Referee 2". I use him by opening a separate Claude instance, so that I don't have Claude Code reviewing its own code. This second Claude Code instance is Referee 2. It didn't write the code we're using to submit the batch requests. Its sole job is to review the other Claude Code's code with the critical eye of an academic reviewer and then write a referee report. The idea is to catch problems before you run expensive jobs or publish embarrassing mistakes.

So, I asked Referee 2 to audit the entire project: the code, the methodology, and the presentation deck. And you can see me doing this in the video. The report came back with a recommendation of "Minor Revision Before Submission", academic-speak for "this is good but fix a few things first." I got an R&R!

  1. Label normalization edge cases. The original code checked if "PRO" was in the response, but what if the model returns "NOT PRO-IMMIGRATION"? The string "PRO" is in there, but that's clearly not a pro-immigration classification. Referee 2 suggested using startswith() instead of in, with exact matching as the first check.

  2. Missing metrics. Raw agreement rate doesn't account for chance agreement. If both classifiers label 56% of speeches as NEUTRAL, they'll agree on a lot of neutral speeches just by chance. Referee 2 recommended adding Cohen's Kappa.

  3. Temporal stratification. Speeches from 1880 use different language than speeches from 2020. Does gpt-4o-mini understand 19th-century political rhetoric as well as modern speech? Referee 2 suggested analyzing agreement rates separately for pre-1950 and post-1950 speeches.

  4. The prompt design question. Referee 2 echoed my concern about the detailed prompt potentially biasing results toward keyword matching.

Referee 2 also noted what was already working:

  • Clean code structure with one script per task

  • Defensive programming in the submission script

  • Good logging throughout

  • The deck following "Rhetoric of Decks" principles (more on this below)

I implemented the required fixes. I had to pause the recording at certain points, but I think it probably took about 30 minutes. The code is now more robust than it would have been without the review.
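To give a flavor of what the first two fixes look like in practice, here is a hedged sketch; the function and variable names are mine, not the actual script's:

```python
from sklearn.metrics import cohen_kappa_score

LABELS = ["PRO-IMMIGRATION", "ANTI-IMMIGRATION", "NEUTRAL"]

def normalize_label(raw: str) -> str | None:
    """Exact match first, then startswith(), so 'NOT PRO-IMMIGRATION' is not read as PRO."""
    text = raw.strip().upper()
    if text in LABELS:
        return text
    for label in LABELS:
        if text.startswith(label):
            return label
    return None  # flag for manual review instead of guessing

# Chance-corrected agreement between the paper's labels and the LLM's labels.
original = ["NEUTRAL", "PRO-IMMIGRATION", "ANTI-IMMIGRATION", "NEUTRAL"]
llm      = ["NEUTRAL", "PRO-IMMIGRATION", "NEUTRAL",          "NEUTRAL"]
print(cohen_kappa_score(original, llm))
```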

One thing I've learned from teaching: if you can't explain what you did in slides, you probably don't fully understand it yourself.

I asked Claude to create a presentation deck explaining the project. But I gave it constraints: follow the "Rhetoric of Decks" philosophy I've been developing, which emphasizes:

  • One idea per slide

  • Beauty is function (no decoration without purpose)

  • The slide serves the spoken word (slides are anchors, not documents)

  • Narrative arc (Problem → Investigation → Resolution)

I'm going to save the deck, though, for tomorrow when the results are finished so that we can all look at it together! Cliffhanger!

As of the moment I'm typing this, the batch has been sent. Here's where things stand right now. Some of the jobs are nearly done, and some have yet to start.

But here are some of the things I'm wondering as I wait.

  1. Will the LLM agree with the fine-tuned model? The original paper reports ~65% accuracy for tone classification, with most errors between neutral and the extremes. If gpt-4o-mini achieves comparable agreement, that's a validation of zero-shot LLM classification. If it's much lower, we learn that fine-tuning still matters.

  2. Will agreement differ by time period? Will the LLM do better on modern speeches (post-1965) than on 19th-century rhetoric? The training data for GPT models skews recent, or does it?

  3. Will agreement differ by party? If the LLM systematically disagrees with RoBERTa on Republican speeches but not Democratic ones (or vice versa), that tells us something about how these models encode political language. I can do all this using a transition matrix table, which I sketch just after this list, to see how the classifications differ.

  4. What will the disagreements look like? I'm genuinely curious to read examples where the two classifiers diverge. That's often where you learn the most.
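For the transition matrix mentioned in point 3, a simple cross-tabulation is all it takes; the column names and toy data below are placeholders, not the project's actual schema:

```python
import pandas as pd

# df is assumed to hold one row per speech segment with both sets of labels.
df = pd.DataFrame(
    {
        "original_label": ["NEUTRAL", "PRO-IMMIGRATION", "ANTI-IMMIGRATION", "NEUTRAL"],
        "llm_label":      ["NEUTRAL", "PRO-IMMIGRATION", "NEUTRAL",          "PRO-IMMIGRATION"],
    }
)

# Rows: original RoBERTa-based labels; columns: gpt-4o-mini labels; values: row shares.
transition = pd.crosstab(df["original_label"], df["llm_label"], normalize="index")
print(transition.round(2))
```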

This started as a test of Claude Code's capabilities. Can it handle a real research task with multiple moving parts? Can it handle a "hard task"?

The answer so far is yes, with caveats. Claude needed guidance on methodology. It benefited enormously from the Referee 2 review. And I had to stay engaged throughout, asking questions and pushing back on decisions. Notice this was not "here's a job, now go do it". I'm quite engaged the whole time, but that's also how I work. I think I'll always be in the "dialogue a lot with Claude Code" camp.

But the workflow worked. We went from "I want to replicate this paper" to "batch job submitted" in about an hour. The code is clean and was double-checked (audited) by Referee 2. The documentation is thorough. The methodology is defensible. We're updating a paper. I'm one guy in my pajamas filming this whole thing so you can see for yourself how to use Claude Code to do a difficult task.

To me, the real mystery of Claude Code is why the copy-paste method of coding seems to make me less attentive, while Claude Code for some reason keeps me more engaged, more attentive. I still don't quite understand psychologically why that would be the case, but I've noticed over and over that on projects using Claude Code, I don't have the slippery grasp on what I've done and how I've done it that I often had with the copy-paste method of using ChatGPT to code. That kind of copy-paste is mostly mindless button pushing. The way I use Claude Code is not like that, and therein lies the real value. Claude didn't just do the work; it did the work in a way that taught me what was happening. I think that, at least for now, is labor-productivity enhancing. I'm doing new tasks I couldn't do before, I'm getting to answers I can study faster, I'm thinking more, I'm staying engaged, and interestingly, I bet I'm spending the same amount of time on research, but less time on the stuff that isn't actually "real research".

The batch job will take up to 24 hours to complete. Once it's done, I'll download the results and run the comparison analysis.

Part 2 will cover:

  • Overall agreement rate and Cohen's Kappa

  • The transition matrix (which labels does the LLM get "wrong"?)

  • Agreement by time period, party, and source

  • Examples of interesting disagreements

  • What this means for researchers considering LLM-based text classification

Until then, I'm staring at a tracking file with 39 batch IDs and waiting.

Stay tuned.

Technical details for the curious:

  • Model: gpt-4o-mini

  • Records: 304,995

  • Estimated cost: $10.99 (with 50% batch discount)

  • Classification labels: PRO-IMMIGRATION, ANTI-IMMIGRATION, NEUTRAL

  • Comparison metric: Agreement rate + Cohen's Kappa

  • Time stratification: Pre-1950 vs. Post-1950 (using Congress number as proxy)

Repository (original paper's replication data):
github.com/dallascard/us-immigration-speeches

Paper citation:
Card, D., Chang, S., Becker, C., Mendelsohn, J., Voigt, R., Boustan, L., Abramitzky, R., & Jurafsky, D. (2022). Computational analysis of 140 years of US political speeches reveals more positive but increasingly polarized framing of immigration.
PNAS, 119(31), e2120510119.

CSS Bar Charts Using Modern Functions



New CSS features can often make it easier and more efficient to code designs we already knew how to create. That efficiency might stem from reduced code or fewer hacks, or from improved readability thanks to the new features.

In that spirit, let's revamp what's under the hood of a bar chart.

We begin by laying out a grid.

.chart {
  display: grid;
  grid-template-rows: repeat(100, 1fr);
  /* etc. */
}

The chart metric is based on percentage, as in "some amount out of 100." Let's say we're working with a grid containing 100 rows. That should stress test it, right?

Next, we add the bars to the grid with the grid-column and grid-row properties:

.chart-bar {
  grid-column: sibling-index();
  grid-row: span attr(data-value number);
  /* etc. */
}

Right off the bat, I want to note a couple of things. First is that sibling-index() function. It's brand new and has incomplete browser support as of this writing (come on, Firefox!), though it's currently supported in the latest Chrome and Safari (but not on iOS, apparently). Second is that attr() function. We've had it for a while, but it was recently upgraded and now accepts data attributes. So when we have one of those in our markup, like data-value="32", that's something the function can read.

With those in place, that's really all we need to create a pretty darn good bar chart in vanilla CSS! The following demo has fallbacks in place so you can still see the final result in case your browser hasn't adopted these new features:

Yes, that was easy to do, but it's best to know exactly why it works. So, let's break it down.

Automatically Establishing Grid Columns

Declaring the sibling-index() function on the grid-column property explicitly places the list items in consecutive columns. I say "explicit" because we're telling the grid exactly where to place each item based on its order in the markup: the first item goes in the first column, the second item in the second column, and so on. That's the power of sibling-index(): the grid intelligently generates the order for us without us having to do it manually via CSS variables.

/* First bar: sibling-index() = 1 */
grid-column: sibling-index();

/* ...results in: */
grid-column: 1;
grid-column-start: 1; grid-column-end: auto;

/* Second bar: sibling-index() = 2 */
grid-column: sibling-index();

/* ...results in: */
grid-column: 2;
grid-column-start: 2; grid-column-end: auto;

/* etc. */

Automatically Establishing Grid Rows

It's pretty much the same thing! But in this case, each bar occupies a certain number of rows based on the percentage it represents. The grid gets those values from the data-value attribute in the markup, effectively telling the grid how tall each bar in the chart should be.

/* First bar: data-value="32" */
grid-row: span attr(data-value number);

/* ...results in: */
grid-row: span 32

/* Second bar: data-value="46" */
grid-row: span attr(data-value number);

/* ...results in: */
grid-row: span 46

The attr() function, when supplied with a data type parameter (number in our case), casts the value retrieved by attr() into that specific type. In our example, the attr() function returns the value of data-value as a number, which is then used to determine how many rows each bar spans.

Let's Make Different Charts!

Since we have the nuts and bolts of this technique down, I figured I'd push things a bit and demonstrate how we can apply the same techniques to all sorts of CSS-only charts.

For example, we can use grid-row values to control the vertical direction of the bars:

Or we can skip bars altogether and use markers instead:

We can also swap the columns and rows for horizontal bar charts:

Wrapping up

Pretty exciting, right? Just look at all the techniques we used to pull these things off before the days of sibling-index() and an upgraded attr():

Whoop wins a key US court ruling against copycat wearables



What you need to know

• A U.S. court has blocked Lexqi from selling faceless wearables that closely copy Whoop's signature design.
• The court ruled that Whoop's design has been central to its business and was nearly identical in Lexqi's products.
• While Whoop's pricing remains controversial, the ruling reinforces accountability against counterfeit products.

Whoop may have just secured a big win against brands selling similar faceless health trackers in the U.S.

Traditionally, when it comes to fitness bands (not to be confused with smartwatches), Whoop is usually the first name that comes to mind. However, thanks to its high pricing and the nature of the market, a quick search on Amazon reveals several brands selling similarly designed products in the U.S.

In 1916, hybrid cars could've changed history. But Ford wouldn't allow it.



In October 1914, as gas cars were tightening their grip on America's roads, Frank W. Smith, president of the Electric Vehicle Association of America, stood before a convention in Philadelphia and declared victory. Electric cars, he said, were "absolutely and unquestionably the vehicle of the future, both for business and pleasure." With mass production and a wider network of charging stations just around the corner, "it is only a matter of time," he promised, "when the electrically propelled vehicle will predominate."

The future Smith imagined wouldn't show signs of life for nearly 100 years, but it might have come far sooner had America's industrial leaders stopped treating automotive power as a binary choice between gasoline and electricity. A compelling alternative lay in between. Hybrid power was cleaner and capable of guiding transportation through a more climate-friendly century while batteries and charging infrastructure matured. But by the time a suitable hybrid arrived, just two years after Smith's proclamation, the world had already committed itself to gas.

Henry Ford and Thomas Edison tried to electrify America's cars

In 1914, Smith's optimism seemed justified. All year, E. G. Liebold, Henry Ford's influential private secretary, had been signaling to the press that Ford and Thomas Edison were teaming up to build an affordable electric car. Ford's son, Edsel, was overseeing production, and the car was set to be released in 1915.

With the two most famous industrialists in America, the leading car manufacturer and the nation's most celebrated inventor, joining forces to mass-produce electric automobiles, how could electric cars fail? Earlier that year, Ford and Edison, who had been friends for more than a decade, had even purchased their own electric cars from leading automotive manufacturer Detroit Electric to publicly affirm their faith in electricity.

Henry Ford (left) and Thomas Edison (right) pose with their newly purchased electric cars from Detroit Electric. Image: Public Domain

The early 20th-century heyday of electric cars

At the turn of the 20th century, electric cars were symbols of refinement and technological progress, popular in wealthy urban neighborhoods. Companies like Rauch & Lang, Columbia, Detroit Electric, and Studebaker built electric cars that were meticulously engineered. They started at the flip of a switch. They were quiet and offered a smooth ride through busy city streets.

Charging stations appeared in carriage houses, public garages, and even outside stores. Popular Science featured such innovations, including a three-wheeled electric car designed to "glide through the shopping district" and a "flivverette", a miniature electric car small enough to be parked in a "dog-house." Electric taxis competed with horse-drawn carriages to ferry passengers through dense urban cores. In an era when roads were still rough and driving was still novel, electric automobiles seemed civilized.

Gasoline cars, by contrast, were noisy and temperamental. Getting them started required muscle to turn a stiff crank. They rattled, stalled, and belched exhaust. Early motorists often carried tools and spare parts, anticipating breakdowns as part of the journey.

A photograph taken sometime between 1897 and 1900 of an electric motor cab and driver in London. Cars of any kind would have been a rare sight at the time. The cab shown may be a Bersey electric cab, introduced to London in 1897. They weighed two tons and had a range of 30 miles before they needed recharging. They suffered from various faults and were taken off the road in 1900. Image: Heritage Images / Contributor / Getty Images Heritage Images

Thomas Edison, like many, believed electric cars would eventually prevail over gas. Obsessed with improving battery technology, Edison saw the electric vehicle as a natural extension of his life's work in electricity. Even though he was friends with Henry Ford, and encouraged Ford to develop internal combustion engines, Edison reportedly dismissed gas cars as noisy and foul-smelling, praising electricity as cleaner and simpler. In the early years of the automobile age, the quiet hum of electric motors, not the explosion of gasoline, seemed inevitable.

The Ford-Edison electric car that never was

But by 1916, the Ford-Edison electric car still hadn't materialized. There was some speculation, never confirmed, that oil tycoons like John D. Rockefeller had persuaded Ford to kill the project, but even without such pressure, electric car technology simply wasn't competitive with gas.

Batteries, which were predominantly lead-acid or nickel-iron, were too inefficient, too heavy, and too slow to recharge for the kind of fast-paced, mass-market car world consumers were beginning to demand. Plus, in 1916, electricity was scarce outside cities.

Clinton Edgar Woods, the forgotten automobile inventor behind the first hybrid cars

But even as gas cars surged, an engineer named Clinton Edgar Woods offered a different solution. Instead of choosing between electricity and gas, he combined them, creating the first commercially viable hybrid automobile.

Today, Woods has largely vanished from popular automotive history, but he was an important innovator in the early days of the automobile. Before he launched his hybrid in 1916, Woods had already been at the forefront of electric vehicle design for nearly two decades. In 1899, he launched one of the first electric car companies, the Woods Motor Vehicle Company.

Clinton Edgar Woods and his wife pose for a photograph taken between 1915 and 1920. Image: Library of Congress / LC-B2- 4845-8 / Public Domain

In 1900, before the Ford Motor Company even existed and more than a decade before Smith's speech, Woods published The Electric Automobile: Its Construction, Care, and Operation. It was a user manual grounded in electric-car operational fundamentals, approaching the subject as if electricity were a foregone conclusion. He explained how to maintain batteries, how to drive efficiently, and how to care for motors. It was not a do-it-yourself guide for a fringe technology; it was a seminal manual for the automotive future.

The 1916 debut of Clinton Edgar Woods's first hybrid car

Popular Science announced Woods's new hybrid car with fascination in 1916. "The power plant of this unique vehicle," the magazine explained, "consists of a small gasoline motor and an electric-motor generator combined in one unit under the hood forward of the dash, and a storage battery under the rear seats." Woods named the car the Dual Power, referring to its dual power sources. Today, we call it a hybrid.

Woods's car didn't threaten gasoline's emergence; it promised to leverage it. Where Ford, Edison, and Smith were focused on pure electric, Woods offered a compromise. His hybrid was designed to preserve the elegance and smooth operation of electric motors while conceding the practical power and range that fuel offered. His car provided dynamic braking with regenerative capabilities, using the motor to slow the car and recharge its battery, a feature that wouldn't be seen in cars for another century. It also eliminated the need for a clutch, simplifying operation of the gas engine, just like an automatic transmission. And his design used gas power to recharge the batteries, a must where electricity was unavailable.

In 1916, the Woods Motor Vehicle Company debuted the first commercially viable hybrid vehicle, the Dual Power (shown here). Image: Buch-t / CC BY-SA 3.0 de

Woods's hybrid was not the first dual-powered car; that claim likely goes to Ferdinand Porsche, who developed a hybrid in 1900, the Lohner-Porsche Semper Vivus. But it was the first attempt to build a mass-producible hybrid. By the time it arrived, however, the market had already made its choice.

In 1916, Ford alone sold more than 700,000 gas cars, while electric car sales collapsed to less than one percent of all cars sold, sliding from the lead in 1900 to a mere niche. Woods's Dual Power car was one of the last serious efforts to salvage an electric future that was slipping away.

Oil, gas, and our love affair with internal combustion

The world didn't abandon electric cars because they weren't reliable or well-engineered; it abandoned them because gasoline solved immediate problems electricity couldn't, mainly speed, range, and fuel distribution. At a time when the competition between electricity and gas was at an inflection point, infrastructure sealed the outcome. It wasn't until the 1930s that electricity began to spread reliably into rural areas.

By contrast, even in the early 1900s gasoline could be transported in barrels and cans. A gasoline car owner could find fuel anywhere from a general store to one of the new fueling stations. Electric cars, on the other hand, were bound to their urban grids, and charging them took far longer than topping off a gas tank.

Woods's hybrid addressed the recharging limitation, and it offered much greater fuel efficiency than gas-only cars, but it was nearly four times the price of a Ford Model T: $2,600 in 1916 (about $79,000 today), while a Model T cost $700 (about $21,000 today). Plus, the Dual Power's top speed was 35 mph compared with the Model T's 45 mph.

Had Woods possessed Ford's mass-production capability, the price gap might have narrowed. Even so, the hybrid's inherent complexity would have added cost and compromised speed. And yet, such disadvantages might have been overcome, especially in urban settings, had there been the vision and will among America's industrialists.


The road not taken

If we had chosen hybrid designs in the early years of automotive power, would we have long ago solved the limitations of electric vehicle technology and significantly reduced greenhouse gas emissions? It's impossible to know, but even today the outlook remains mixed.

In the U.S., electric vehicles accounted for less than eight percent of the passenger car market in 2025, while gas-only vehicles still made up more than 75 percent of the roughly 16.2 million cars sold. Hybrids, meanwhile, have gained steadily: sales surged 36 percent in the second half of 2025, reaching nearly 15 percent of all passenger car purchases. Globally, electric vehicle sales continue to rise, with more than 20 million electrified cars in 2025, mostly in China and Europe. But electric vehicles still represent less than a quarter of all cars sold, a figure that shows signs of plateauing.

As America's politics swing between looking forward to sustainable power and falling back on our century-long love affair with oil and gas, the hybrid may yet have a role to play in transitioning automotive technology back to electricity, where it started.

Just as Clinton Edgar Woods saw the wisdom of combining the advantages of gasoline and electricity, so today's hybrids could serve as a bridge while battery technology and charging infrastructure continue to mature. In that sense, Woods's hybrid is more than a historical footnote; it's a compass pointing us toward the road not taken.

In A Century in Motion, Popular Science revisits fascinating transportation stories from our archives, from hybrid cars to moving sidewalks, and explores how these inventions are re-emerging today in surprising ways.

     


Bill Gourgey is a Popular Science contributor and unofficial digital archeologist who enjoys excavating PopSci's vast archives to update noteworthy stories (yes, merry-go-rounds are noteworthy).


The Domains and Organizational Capabilities of AI Security



When your CISO mentions "AI security" in the next board meeting, what exactly do they mean? Are they talking about protecting your AI systems from attacks? Using AI to catch hackers? Preventing employees from leaking data to an unapproved AI service? Ensuring your AI doesn't produce harmful outputs?

The answer might be "all of the above", and that's precisely the problem.

AI has become deeply embedded in business operations. As a result, the intersection of "AI" and "security" has become increasingly complex and confusing. The same terms are used to describe fundamentally different domains with distinct objectives, leading to miscommunication that can derail security strategies, misallocate resources, and leave critical gaps in protection. We need a shared understanding and a shared language.

Jason Lish (Cisco's Chief Information Security Officer) and Larry Lidz (Cisco's VP of Software Security) co-authored this paper with me to help address this challenge head-on. Together, we introduce a five-domain taxonomy designed to bring clarity to AI security conversations across business operations.

The Communication Challenge

Consider this scenario: your executive team asks you to present the company's "AI security strategy" at the next board meeting. Without a common framework, each stakeholder may walk into that conversation with a very different interpretation of what's being asked. Is the board asking about:

• Protecting your AI models from adversarial attacks?
• Using AI to enhance your threat detection?
• Preventing data leakage to external AI services?
• Providing guardrails for AI output safety?
• Ensuring regulatory compliance for AI systems?
• Defending against AI-enabled or AI-generated cyber threats?

This ambiguity leads to very real organizational problems, including:

• Miscommunication in executive and board discussions
• Misaligned vendor evaluations, comparing apples to oranges
• Fragmented security strategies with dangerous gaps
• Resource misallocation focused on the wrong objectives

Without a shared framework, organizations struggle to accurately assess risks, assign accountability, and implement comprehensive, coherent AI security strategies.

The Five Domains of AI Security

We propose a framework that organizes the AI-security landscape into five clear, intentionally distinct domains. Each addresses different concerns, involves different threat actors, requires different controls, and typically falls under different organizational ownership. The domains are:

• Securing AI
• AI for Security
• AI Governance
• AI Safety
• Responsible AI

Each domain addresses a distinct category of harm and is designed to be used in conjunction with the others to create a comprehensive AI strategy.

These five domains don't exist in isolation; they reinforce and depend on one another and must be intentionally aligned. Learn more about each domain in the paper, which is intended as a starting point for industry discussion, not a prescriptive checklist. Organizations are encouraged to adapt and extend the taxonomy to their specific contexts while preserving the core distinctions between domains.

Framework Alignment

Just as the NIST Cybersecurity Framework provides a common language for talking about the domains of cybersecurity without removing the need for detailed cybersecurity frameworks such as NIST SP 800-53 and ISO 27001, this taxonomy is not meant to work in isolation from more detailed frameworks, but rather to provide a common vocabulary across the industry.

As such, the paper builds on Cisco's Integrated AI Security and Safety Framework recently released by my colleague Amy Chang. It also aligns with established industry frameworks, such as the Coalition for Secure AI (CoSAI) Risk Map, MITRE ATLAS, and others.

The intersection of AI and security is not a single problem to solve, but a constellation of distinct risk domains, each requiring different expertise, controls, and organizational ownership. By aligning these domains with organizational context, organizations can:

• Communicate precisely about AI security concerns without ambiguity
• Assess risk comprehensively across all relevant domains
• Assign accountability clearly to the right teams
• Invest strategically rather than reactively

The MANGMI Pocket Max includes free controller modules for a limited time



TL;DR

• A few hours from launch, MANGMI has added free microswitch modules for early buyers of the Pocket Max handheld.
• The device features a 7-inch OLED screen, a Snapdragon 865, and swappable control modules.
• Super Early Bird pricing is $199, but it faces stiff competition from the AYN Odin 2 Portal.

MANGMI is less than a day away from launching the cheapest 7-inch gaming handheld on the market, the Pocket Max, but the deal just got a little sweeter. Today, the company announced that all Super Early Bird orders will also include a free set of microswitch modules.

The modular nature of the MANGMI Pocket Max is one of its most unusual and confusing features. The device includes a set of soft membrane modules for the D-pad and buttons in the box, with additional clicky microswitch modules sold separately for $15 (previously $12). The modules can't be moved to, say, put the D-pad above the sticks, but some gamers are very picky about their inputs.

The extra modules make it a more attractive deal, but the AYN Odin 2 Portal discount still looms large.

Aside from the modules, the killer feature of the Pocket Max is the 7-inch 144Hz AMOLED display, which is among the largest on any Android gaming handheld. It will also be the cheapest 7-inch handheld on the market, even after the early bird period ends and it goes up to its $240 retail price.

Powering the device is a Snapdragon 865, an older chipset used in gaming handhelds like the Retroid Pocket Flip 2 and Retroid Pocket 5. Despite its age, it's still a very capable and well-tested chipset for emulation, Android gaming, and game streaming.


That said, it faces stiff competition from the AYN Odin 2 Portal, which is currently on sale for $250. With a Snapdragon 8 Gen 2, that device has far more raw horsepower, unlocking more reliable PC and Switch emulation.

That incredible deal has tempered fan reactions to the Pocket Max, which would otherwise be a fantastic device at this price. This may explain why the company decided to change course and include the extra modules for early buyers, rather than keeping them as an upsell.

The MANGMI Pocket Max will go on sale today at 6PM PST on the official website. The Super Early Bird price of $199 (with free modules) will remain in effect until February 12. Orders after that won't ship until the Chinese New Year holidays end.


I Didn't Care for Dildos Until I Tried This One From Lelo



My first sex toy was a bright blue dildo. I was about 19, and as a college student in New Hampshire, I did what anyone in my position would do: hotfooted it to the nearest city to find a sex shop. With my best friend in tow, I jetted in and out of the Condom World on Newbury Street so fast that I only had time to grab the first sex toy I saw and pay for it, keeping my head down the whole time. To say the early 2000s were different when it came to sex toy acceptance would be a gross understatement.

In my small-town mind, they were something that shouldn't be discussed and the very definition of the word taboo. But they weren't taboo enough to keep me from buying that battery-operated blue monstrosity with its exaggerated veins and head. I was embarrassed by it from day one, even more so after I used it, with zero understanding of what I was supposed to get out of it.

With Age Came Wisdom

As I got to know my body better, exploring alone or during partnered sex with other toys I bought, I came to a realization. As much as I like penetration with my partner(s), I didn't particularly enjoy dildos. I found most of them impersonal and poorly designed, and since it wouldn't be until my late 30s that I'd finally have an orgasm from G-spot stimulation, on a pleasure level, they weren't for me. This didn't stop me from trying to get something out of them, especially as I started writing about sexual wellness and suddenly had a world of sex toys and the latest innovations at my fingertips.

My feelings on dildos changed drastically with the Lelo Gigi 3. Finally, an internal dildo with a head that was flattened, as opposed to cone-shaped, meaning it covered more of the G-spot area. Breaking news: that's exactly what I needed all this time. While so many other dildos have rounded heads, which many people love, the flat head of the Gigi 3 is what really sets it apart.

It's not just the shape of the head, but the way it disperses stimulation. Because the eight vibration modes are delivered through the flattened head, the sensations are more intense and rumbly, meaning I can feel them branching out and reaching more nerve endings. People with vulvas are far too often taught that when it comes to sexual pleasure, the focus should be on the clitoris or the G-spot, or both at once, but the reality is that the whole region is chock-full of nerve endings. Hence the reason people with vulvas can experience orgasms outside those two zones, like via the A-spot (anterior fornix) and the U-spot (urethral sponge).

When Size Matters

I'm not a size queen. I'm the first to admit that smaller is better when it comes to external vibrators. However, when it comes to internal vibrators, the size and shape of the shaft matter. The Gigi 3 is the ideal size for G-spot stimulation. Even those who have yet to officially find their G-spot can't possibly miss it when using the Gigi 3, because the length and slight arc of the shaft put that flattened head exactly where every person with a vulva wants it. For me, I like to place a few drops of water-based lube on the head and shaft of the Gigi 3, get myself cozy, then slide it inside. From there, I can melt into the moment without fumbling with buttons (the Gigi 3 is app-controlled).

When I'm not in the mood for internal play, the flat head of the Gigi 3 is great for direct clitoral stimulation or teasing other parts of my vulva. As I've learned, just because the labia doesn't have as many nerve endings as the clitoris doesn't mean it likes to be ignored.

Is the Lelo Gigi 3 my favorite sex toy? No. Depending on the day and my mood, my favorite sex toys and vibrators change. But if you're someone with a vulva who has never liked dildos, the Gigi 3 could be your ticket to sexual pleasure. It can also hit the spot if you prefer clitoral stimulation but want the vibrations to encompass more than just the external tip of the clit. Although the Gigi 3 can be used anywhere, internally and externally, if you have a vulva, this is a must-have.