Friday, February 27, 2026

Nano Banana 2 is Here! Smaller, Faster, Cheaper



Nano Banana! The image model that took the world by storm just got eclipsed by…itself. Yes! Google did it again. After setting the standard with the launch of Nano Banana, they’re back with its highly anticipated follow-up: Nano Banana 2 (formally designated Gemini 3.1 Flash Image).

This new model bridges the gap between studio-quality creative control and rapid generation speeds. By merging the quality of the Pro tier’s outputs with the efficiency of the Flash architecture, Nano Banana 2 aims to make enterprise-grade image generation practical.

We’ll walk through its features and test Nano Banana 2’s performance on real-world tasks.

Nano Banana 2: What’s new?

Speed with quality is the right phrase to describe this new model. Nano Banana 2 eliminates the trade-off between rapid rendering and studio-grade precision, offering a zero-compromise option for visual workflows.

Where Nano Banana excelled at creating visually compelling images, the time it took to produce one left a lot to be desired. Google tackled this problem head-on by streamlining the architecture to reduce latency, letting creators rapidly iterate and refine images through prompts without the long wait times. Based on my tests, this approach paid off.

Key Features

Nano Banana 2 isn’t just a speed upgrade; it brings heavy-hitting capabilities to everyday workflows:

  • Real-Time Web Grounding: The model pulls live information and reference images from Google Search to accurately render real-world subjects, locations, and data.
  • Advanced Text Rendering and Localization: It can generate crisp, highly accurate text inside images (perfect for posters or mockups) and uniquely lets users translate that text into different languages directly within the image.
  • Subject Consistency: The model can maintain the exact likeness of up to 5 different characters and 14 objects throughout a single workflow, making it a viable tool for complex storyboarding.
  • High-Fidelity Formats: It supports native generation across a multitude of aspect ratios (including 16:9, 9:16, and ultra-wide formats) and resolutions ranging from a latency-optimized 512px up to stunning 4K upscaling.
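As a back-of-the-envelope illustration of those format options, here is a small sketch — the helper function and resolution tiers are my own illustration, not an official API — that computes pixel dimensions for a given aspect ratio, scaled so the short edge matches a chosen resolution:

```python
# Hypothetical helper: compute output dimensions for an aspect ratio at a
# given short-edge resolution, mirroring the 512px-to-4K range described above.
def output_dimensions(aspect_w: int, aspect_h: int, short_edge: int) -> tuple[int, int]:
    """Scale an aspect ratio so its shorter side equals `short_edge` pixels."""
    if aspect_w >= aspect_h:  # landscape (or square): height is the short edge
        return (round(short_edge * aspect_w / aspect_h), short_edge)
    return (short_edge, round(short_edge * aspect_h / aspect_w))

# Latency-optimized 16:9 draft vs. a 4K-class 9:16 portrait frame:
draft = output_dimensions(16, 9, 512)
portrait_4k = output_dimensions(9, 16, 2160)
```

This is just arithmetic, but it makes concrete how wide the gap is between the draft tier and the upscaled tier: a 9:16 frame at a 2160px short edge is a 2160×3840 image, roughly 32 times the pixel count of the 512px draft.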

You can access Nano Banana 2 through Gemini’s web interface. Nano Banana 2 is free to use for all users.

Let’s Try Nano Banana 2

Here are five practical ways to put it through its paces:

Task 1: The Infographic Test

Prompt: “Generate a top-down, flat-lay infographic explaining the solar energy cycle. Ensure there is a logical visual flow and absolutely zero spelling errors in the text labels.”

Response:

A beautiful infographic that’s free of spelling errors and answers the query perfectly, while matching the image to the theme it was portraying (positive green vibes).

Task 2: In-Image Translation

Prompt: “Generate a modern advertisement mockup for a sleek pair of headphones featuring the English text ‘Feel The Bass’.”

Response:

Image mockup using Nano Banana 2

A convincing advertisement mockup. But let’s spice things up a bit.

Prompt 2 (Follow-up): “Localize this visual by translating the text into Japanese (‘低音を感じろ’) without altering the underlying image composition or lighting.”

Response:

Japanese text in the image mockup

The Japanese text in the image looked quite convincing. But since I can’t read Japanese, I used Google Translate to confirm whether the translation was correct.

Testing translation accuracy

Wow. That’s almost a 100% match with the original text! This is impressive because Nano Banana 2 was not only able to place the Japanese text convincingly, but also got the translation itself right.

Multilingual graphic design would benefit a lot from this capability of Nano Banana 2.

Task 3: Character Storyboarding

Prompt: “Generate a distinctive character design of a cyberpunk detective wearing a glowing blue visor. Using this exact character, generate three new images placing them in totally different environments: a bustling neon night market, a sterile white corporate lobby, and a dusty wasteland.”

Response:

Storyboard using Nano Banana 2

Cyberpunk 2077 vibes. The detective’s character design was consistent across the environments. The patches on the coat, the scar on the forehead, and many other details carry over faithfully between the different scenes.

This is something that Nano Banana struggled with.

Task 4: Semantic Natural-Language Editing

TLOU screenshot

Prompt: “Take this image and change the atmosphere to that of a circus. Swap the clothing for formal suits. Make it as if they’re in a shooting gallery. Don’t change the subjects or where they’re positioned in the image.”

Response:

AI-enhanced image by Nano Banana 2

Impressed. The characters weren’t moved at all, and their attire was changed according to the instruction in a natural way. Nano Banana 2 was able to revamp this image without altering most of its visual elements, like the water truck, the tree (as well as the girl), and the chimney smoke.

Task 5: Live Weather Grounding

Prompt: “Generate a photorealistic view from a cozy apartment window looking directly at the Eiffel Tower, accurately reflecting today’s actual live weather conditions in Paris.”

Response:

Live location image using Nano Banana 2

This image looks like a high-quality Instagram photo. If you were to tell your friend that this is a snap you took while you were in Paris, chances are —ahem… watermark— they’re going to believe you. At the time this test was run, it was 12:52 am in Paris with cloudy weather, and this scene captures those details perfectly.

Nano Banana 2 Review

While using Nano Banana 2, I made the following observations:

  1. Specificity: Nano Banana 2 is an incredible tool, if you can ask it properly. Ask and you shall receive! The old adage epitomizes my experience using it. The clearer your prompts are, the better the output image quality.
  2. Guardrails: For all its might, Nano Banana 2 will outright reject any image request that contains a copyrightable entity in the prompt. Avoid using company names or anything specific that might trigger this guardrail.
Copyright issues with Nano Banana 2
Same prompt as Task 4, with “Circus” replaced by “Disneyland”
  3. Limitations: Nano Banana 2 is available for free to all Gemini users, but some features are locked behind subscriptions, and free usage hits its limits quite easily.
Nano Banana 2 features limited to Gemini Pro accounts

Performance and Benchmarks

While exact benchmark numbers are still stabilizing, early reception places Nano Banana 2 at the very top of visual AI leaderboards like Arena.ai.

It outperforms its predecessor, Gemini 2.5 Flash Image, by drastically reducing latency while matching the instruction-following and prompt adherence previously reserved for the heavier Gemini 3.1 Pro Image model.

Conclusion

Nano Banana 2 represents a major shift in how we approach AI image generation. By removing the friction of long rendering times while keeping advanced features like accurate text generation and character consistency, it effectively ends the compromise between speed and quality.

Whether you’re an enterprise marketer localizing global ad campaigns or a creator building a comic book, Nano Banana 2 is positioned to become the new default standard for visual generation.

Frequently Asked Questions

Q1. What makes Nano Banana 2 different from its predecessor?

A. It combines Flash-level speed with Pro-quality image generation, removing the traditional trade-off.

Q2. Can Nano Banana 2 translate text within generated images?

A. Yes, it supports in-image text translation without altering composition or lighting.

Q3. How many subjects can Nano Banana 2 keep consistent?

A. It maintains consistency for up to 5 characters and 14 objects across workflows.

I specialize in reviewing and refining AI-driven research, technical documentation, and content related to emerging AI technologies. My experience spans AI model training, data analysis, and information retrieval, allowing me to craft content that is both technically accurate and accessible.


Finding value with AI and Industry 5.0 transformation


“To realize the promise of Industry 5.0, companies must move beyond cost and efficiency to focus on growth, resilience, and human-centric outcomes,” says Sachin Lulla, EY Americas industrials and energy transformation leader. “This requires not just new technologies, but new ways of working—where people and machines collaborate, and where value is measured not just in dollars saved, but in new opportunities created.”

An MIT Technology Review Insights survey of 250 business leaders from around the world reveals that most industrial investments still target efficiency. And while the data shows human-centric and sustainable use cases deliver higher value, they are underfunded. The research shows most organizations aren’t realizing the full value potential of Industry 5.0 due to a combination of:

• Culture, skills, and collaboration barriers.
• Tactical and misaligned technology investments.
• Use-case prioritization focused on efficiency over growth, sustainability, and well-being.

The barrier to achieving Industry 5.0 transformation is not only about fixing the technology, according to research from EY and Saïd Business School at the University of Oxford; it is also about bolstering human-centric factors like strategy, culture, and leadership. Companies are investing heavily in digital transformation, but not always in ways that unlock the full human potential of Industry 5.0.

“We’re not just doing digital work for work’s sake, what I call ‘chasing the digital fairies,’” says Chris Ware, general manager, iron ore digital, Rio Tinto. “We have to be very clear on which pieces of work we go after and why. Every domain has a unique roadmap for how to deliver the best value.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.


Stem cell patch reverses brain damage in fetuses with spina bifida



A false-colour X-ray showing a large neural tube defect (red) on both sides of the lower back in someone with spina bifida

SCIENCE PHOTO LIBRARY

A patch made of stem cells from donor placentas has been used to treat fetuses in the womb with a severe form of spina bifida as part of a world-first trial. The novel approach appears to have reversed a brain complication associated with the congenital condition at least as effectively as the go-to treatment, and is expected to enable more children to walk in the long term.

The mother of one of the babies, who is now four years old, says she expected that her son Toby would require a wheelchair when he was diagnosed with the condition in the womb. “But Toby is healthy [and] has hit all of his milestones – he’s walking, running and jumping – and has no problems with bladder control, which is unusual for people with the condition,” she says.

Spina bifida – which affects about 1 in every 2800 births in the US each year – occurs when a baby’s spine and spinal cord don’t fully develop in the womb. In the most severe form of the condition, called myelomeningocele, the spinal cord and its surrounding tissue protrude out of a gap in the vertebrae, which often impairs mobility and bowel and bladder control. The cause of spina bifida is unknown, but folic acid deficiency during pregnancy raises the risk.

One of the standard treatments involves surgery in the womb that tucks the spinal cord and the surrounding tissue back into the vertebrae, before stitching up the skin to form a tight seal. “But many children still end up unable to walk and there’s [usually] no improvement in bowel or bladder control,” says Diana Farmer at the University of California, Davis.

This led Farmer and her colleagues to wonder if the addition of stem cells could help by promoting the growth and repair of spinal tissue. To find out, they recruited six pregnant women carrying fetuses with myelomeningocele.

By about 24 weeks’ gestation, all the fetuses had developed a common complication called hindbrain herniation, in which too much fluid builds up in the skull, pushing the bottom of the brain, the cerebellum, through an opening at the skull’s base. The standard surgery usually helps to reverse hindbrain herniation, but many children still have complications.

In the latest trial, all the fetuses underwent the standard surgery but also received a patch, measuring a few centimetres long, that contained stem cells derived from donated placentas embedded in a matrix of sticky proteins. Surgeons placed this patch on the spine before the skin was sewn around it. “The cells secrete their magic stem cell juice,” says Farmer.

At birth, the surgical site had healed well in all the babies, with no signs of abnormal cell growth. “A key worry was that adding stem cells in a fetus would make the cells grow like crazy, but we didn’t see that,” says Farmer. MRI scans of their brains also showed that the treatment completely reversed hindbrain herniation.

“My personal opinion is that this will improve long-term outcomes compared with the standard approach [based on evidence from animal studies],” says Panicos Shangaris at King’s College London.

The researchers hope to assess this in a trial in which 35 fetuses with myelomeningocele will receive the stem-cell patch, and their outcomes will be compared against a previous study that used the standard surgery, says Farmer.

But Shangaris says that a better comparison, and one more likely to lead to the treatment being approved, would be to test the two approaches in a head-to-head trial that assesses their safety and efficacy in fetuses randomly assigned to each intervention.


Learnings from COBOL modernization in the real world



There’s a lot of excitement right now about AI enabling mainframe application modernization. Boards are paying attention. CIOs are being asked for a plan. AI is a real accelerator for COBOL modernization, but to get results, AI needs extra context that source code alone can’t provide. Here’s what we’ve learned working with 400+ enterprise customers: mainframe modernization has two very different halves. The first half is reverse engineering, understanding what your existing systems actually do. The second half is forward engineering, building the new applications.

The first half is where mainframe projects live or die. However, coding assistants are genuinely good at only the second half. Give them a clear, validated spec and they’ll build modern applications fast.

We have learned that delivering successful COBOL modernization requires a solution that can reverse engineer deterministically, produce validated and traceable specs, and let those specs flow into any AI-powered coding assistant for the forward engineering. A successful modernization requires both reverse engineering and forward engineering.

What a successful mainframe modernization requires

Bounded, complete context

Mainframe applications are big. Really big. A single program can run tens of thousands of lines, pulling in shared data definitions from across the system, calling other programs, orchestrated via JCL that spans the entire landscape. Today, AI can only process a limited amount of code at a time. Feed it one program and it can’t see the copybooks, the called subroutines, the shared files, or the JCL that ties everything together. It will produce output that looks reasonable for the code it can see but miss dependencies it was never shown. In working with customers, we solve this by extracting all implicit dependencies deterministically first, then feeding AI bounded, complete pieces with everything they need already resolved. That way AI focuses on what it’s great at (understanding business logic, producing specs) instead of guessing at connections it can’t see.
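To make the “bounded, complete pieces” idea concrete, here is a rough sketch — my own illustration, not AWS Transform’s actual implementation — of one such deterministic step: inlining the copybooks a COBOL program references before handing it to a model, so every data definition is visible inside the unit:

```python
import re

# Hypothetical pre-processing step: resolve COBOL COPY (copybook) references
# deterministically, so the model receives one bounded, self-contained unit
# instead of a program with invisible external dependencies.
COPY_RE = re.compile(r"^\s*COPY\s+([A-Z0-9-]+)\s*\.", re.MULTILINE)

def resolve_copybooks(program: str, copybooks: dict[str, str]) -> str:
    """Inline every copybook the program references (one level deep here;
    a real tool would recurse and detect cycles)."""
    def inline(match: re.Match) -> str:
        return copybooks[match.group(1)]  # fail loudly if a dependency is missing
    return COPY_RE.sub(inline, program)

copybooks = {"CUSTREC": "       01 CUSTOMER-RECORD.\n          05 CUST-ID PIC 9(8)."}
program = "       COPY CUSTREC.\n       MOVE 1 TO CUST-ID."
bounded_unit = resolve_copybooks(program, copybooks)
assert "CUSTOMER-RECORD" in bounded_unit  # the dependency is now visible in-unit
```

The point of doing this step with deterministic parsing rather than asking the model to guess is exactly the one made above: a KeyError on a missing copybook is a hard failure you can fix, whereas a model silently inventing the missing record layout is not.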

Platform-aware context

Here’s something that surprises people: the same COBOL source code behaves differently depending on the compiler and runtime. How numbers get rounded, how data sits in memory, how programs talk to middleware. These aren’t in the source code. They’re determined by the specific compiler and runtime environment the code was built for. Decades of hardware-software integration can’t be replicated by simply moving code. We found that AI does its best work when platform-specific behavior has already been resolved. Feed AI clean, platform-aware input, and it delivers. Feed it raw source code, and it will generate output that looks right but behaves differently than the original. In financial systems, a rounding difference isn’t a cosmetic issue. It’s a material error.
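The rounding point is easy to demonstrate. COBOL’s ROUNDED clause typically rounds halves away from zero, while many modern runtime defaults round half to even (banker’s rounding) — a sketch using Python’s decimal module shows the divergence on a single value:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# The same source-level "round to cents" intent, under two rounding regimes:
# COBOL's ROUNDED clause typically rounds half away from zero, while IEEE-style
# defaults in many modern runtimes round half to even.
amount = Decimal("2.345")

cobol_style = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
modern_default = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

print(cobol_style)      # 2.35
print(modern_default)   # 2.34

# One cent per posting, compounded over millions of transactions,
# is exactly the kind of material error described above.
```

Both results "look right" in isolation, which is why this class of bug survives code review; it only surfaces when you reconcile the modernized system’s totals against the original’s.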

A traceable foundation

If you’re in banking, insurance, or government, your regulators will ask one question: can you prove you didn’t miss anything? AI on its own isn’t enough to extract business logic and generate documentation that regulators will accept. Regulatory compliance requires every output to have a formal, auditable connection back to the original system. We learned early that traceability doesn’t come from AI reading source code. It comes from structuring the code into precise, bounded pieces so we know exactly what goes into the AI and can trace every output back to its source. For customers in regulated industries, this is often the difference between a project that moves forward and one that stalls.
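A minimal sketch of what such an audit trail can look like — my own illustration, not the product’s data model: every bounded code unit gets an ID, every generated spec fragment must cite the unit IDs it was derived from, and coverage is then checked mechanically:

```python
# Hypothetical traceability check: every source unit must be cited by at least
# one spec fragment, and no fragment may cite a unit that doesn't exist.
def audit_traceability(unit_ids: set[str],
                       spec_citations: dict[str, set[str]]) -> tuple[set[str], set[str]]:
    cited = set().union(*spec_citations.values()) if spec_citations else set()
    unknown = cited - unit_ids        # citations of units that don't exist (fabrication)
    uncovered = unit_ids - cited      # logic the specs never accounted for
    return unknown, uncovered

units = {"PAY001:PARA-100", "PAY001:PARA-200", "PAY002:PARA-100"}
citations = {
    "spec-interest-calc": {"PAY001:PARA-100", "PAY001:PARA-200"},
}
unknown, uncovered = audit_traceability(units, citations)
assert not unknown                       # nothing was invented
assert uncovered == {"PAY002:PARA-100"}  # this unit still needs a spec
```

The mechanical answer to “did you miss anything?” is the `uncovered` set: when it is empty, every bounded unit is accounted for by at least one traceable spec fragment.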

How we set AI up for success in AWS Transform

We built AWS Transform to modernize mainframe applications at scale. The idea is simple: give AI the right foundation, and customers get traceable, correct, and complete results they can take to production. AWS Transform starts by building a complete, deterministic model of the application. Specialized agents extract code structure, runtime behavior, and data relationships across the entire system — not one program at a time, but the whole landscape. This produces a dependency graph aligned with the exact compiler semantics, capturing cross-program dependencies, middleware interactions, and platform-specific behavior before AI gets involved.

From there, large programs get decomposed into bounded, processable units. Platform-specific behavior is resolved deterministically. The units are sized for AI to process effectively. Then AI extracts business logic in natural language, and every output gets validated against the deterministic evidence we’ve already extracted. Specs map back to the original code. When a regulator asks “did you miss anything?”, there’s a verifiable answer.

What sets this apart is that AI never operates in the dark. Every unit it processes has known inputs and expected outputs, so we can validate what comes back. No other approach on the market closes that loop. What comes out is a set of validated, traceable technical specs that plug into any modern development environment. The hard part of modernization is understanding what exists today. Once you’ve captured that in precise specs, AI-powered IDEs can build the new application with confidence.

An end-to-end platform for enterprise transformation

Nobody modernizes one app. Our customers are looking at portfolios of hundreds or thousands of interconnected applications, and they need far more than analysis help. AWS Transform automates across the full lifecycle: analysis, test planning, refactoring, reimagination. The whole thing. And within that, different apps need different paths. Some get reimagined from scratch. Some just need a clean, deterministic conversion to Java. Some need to get out of the data center first and modernize later. Some will stay on the mainframe. We learned the hard way that treating them all the same is how projects blow up. The portfolio decision (which app, which path, what order) matters as much as the tech. In our experience, this is the only way enterprise modernization actually finishes. One-size-fits-all approaches are why these projects fail.

One more thing that gets missed constantly: test data. You can’t prove the modernized app works without real production data and real scenarios. We’ve watched teams get all the way through code conversion and then stall because nobody planned for data capture. So we built test planning and on-prem data capture into the platform from day one. Not a cleanup exercise at the end. That’s what this actually looks like when it works. End-to-end automation, the right path for each app, validation baked in.

How to get this right

The question isn’t “should we use AI for COBOL modernization?” Of course you should. The question is how you set AI up to deliver: traceability for regulators, platform-specific behavior handled correctly, consistency across your application portfolio, and the ability to scale to hundreds of interconnected programs. That’s what we learned building AWS Transform. Deterministic analysis as the foundation. AI as the accelerator. An AWS service that covers the full range of modernization patterns.

And it’s working.

BMW Group reduced testing time by 75% and increased test coverage by 60%, significantly reducing risk while accelerating modernization timelines.

Fiserv completed a mainframe modernization project that would have taken 29+ months in just 17 months.

Itaú cut mainframe application discovery time and testing time by more than 90%, enabling teams to modernize applications 75% faster than with previous manual efforts.


About the authors

Dr. Asa Kalavade

Asa leads AWS Transform, helping customers migrate and modernize their infrastructure, applications, and code. Previously, she led the AWS go-to-market tools transformation, incorporating generative AI capabilities. She also managed hybrid storage and data transfer services. Before joining AWS in 2016, Asa founded two venture-backed startups and remains active in mentoring Boston startups. She holds a PhD in electrical engineering and computer science from UC Berkeley and more than 40 patents.

The best new features of C# 14



Available as part of .NET 10, which was released last November, C# 14 brings a plethora of new features and enhancements that make it easier to write efficient, high-performance code. Just as we walked through the new features and enhancements in C# 13 and C# 12, in this article we’ll take a close look at some of the best new features in C# 14.

To work with the code examples provided in this article, you should have Visual Studio 2026 or a later version installed on your system. If you don’t already have a copy, you can download Visual Studio 2026 here.

File-based apps

Support for file-based apps is perhaps the most striking new feature in this release of the C# programming language. Until C# 14, we had to follow a multi-step process to run even a minimal .cs file. Not only was this a multi-step process, it also incurred significant extra overhead, because you had to create a solution file and a project file to run your application. Even if all you wanted to do was perform a quick calculation or quickly process a piece of data to test your code, you had to create extra files you might not need later. No longer.

With C# 14, you can now run a C# file directly from the command line without needing a project or solution file.

Let us understand this with a code example. Consider a file named Demo.cs that contains the following code.

Console.WriteLine("This is a sample text");
DateTime dateTime = DateTime.UtcNow.Date;
Console.WriteLine($"Today's date is: {dateTime.ToString("d")}");

You can execute the program using the following command in the console window.

dotnet run Demo.cs

When the program is executed, you’ll see the following text displayed on the console.

Foundry

Note that you can create file-based apps that reference NuGet packages and SDKs using preprocessor directives, without needing a project or solution file.

Extension members

Extension members are a new feature in C# 14 that lets you declare extension properties as well as extension methods. In addition, extension members make it easier to declare extension methods than in earlier versions of C#. Before we dive into extension members, let’s first understand extension methods.

In the C# programming language, extension methods are a feature that enables you to extend the capabilities of classes without the need for inheritance. You don’t need to create subclasses to use extension methods, nor is it necessary to modify or recompile existing class definitions. In addition to improving code readability, extension methods help you add new methods to your existing types (i.e., classes, structs, records, or interfaces). Incidentally, extension methods were first introduced in C# 3.0.

There are numerous extension methods in .NET that let you extend the querying capabilities of both System.Collections.IEnumerable and System.Collections.Generic.IEnumerable&lt;T&gt; using the LINQ standard query operators. While you can take advantage of extension methods to extend a class or an interface in C#, you cannot override their methods. Extension methods can help you extend the functionality of types even if they are sealed, such as the String class in C#.

For example, the Where() extension method is defined in the Enumerable static class in the System.Linq namespace. The following code snippet shows the signature of the Where() extension method:

public static IEnumerable&lt;TSource&gt; Where&lt;TSource&gt;(
    this IEnumerable&lt;TSource&gt; source,
    Func&lt;TSource, bool&gt; predicate)

Note the use of the this keyword. Prior to C# 14, to implement an extension method, you had to create a static method and pass the this reference as a parameter to the method. In C# 14, the code snippet above can be replaced with an extension block, without the need to specify the this parameter. This is shown in the code snippet given below.

extension&lt;TSource&gt;(IEnumerable&lt;TSource&gt; source)
{
    public IEnumerable&lt;TSource&gt;
        Where(Func&lt;TSource, bool&gt; predicate)
}

The ability to define extension members has other benefits as well. Note that an extension member requires two kinds of information: the receiver to which the member should be applied, and any parameters it might need if the member is a method. With the new extension member syntax, you define the receiver once in an extension block and then write the members as needed. Most importantly, this new syntax lets you define a receiver for an extension member that doesn’t require any parameters, i.e., if you’re using an extension property.

Furthermore, by using the new syntax, you can logically group extensions that apply to the same receiver, and define a new extension block if the receiver changes. Moreover, the static class in which you write your extension blocks or extension methods (if you’re using an earlier version of the C# language) can contain both extension methods that require the this parameter and extension members grouped inside extension blocks, as shown in the C# 14 code listing given below.

public static class StringExtensions
{
    extension(string value)
    {
        public bool ContainsAnyDigit()
        {
            if (string.IsNullOrEmpty(value))
                return false;
            return value.Any(char.IsDigit);
        }
        public bool ContainsAnySpecialCharacter()
        {
            if (string.IsNullOrEmpty(value))
                return false;
            return value.Any(c => !char.IsLetterOrDigit(c));
        }
    }

    public static bool IsNullOrEmptyOrWhiteSpace(this string str)
    {
        return string.IsNullOrWhiteSpace(str);
    }
}

In the preceding code snippet, the extension method IsNullOrEmptyOrWhiteSpace uses the legacy syntax (i.e., it requires the this parameter), while the extension methods ContainsAnyDigit and ContainsAnySpecialCharacter use the new syntax.

You can read more about extension members in C# 14 here.

Improvements to the nameof operator for unbound generics

C# 14 enhances the nameof keyword by supporting unbound generic types (e.g., List<> and Dictionary<,>). Now that nameof can take an unbound generic type as an argument, you no longer need to supply dummy type arguments (such as List<int>) merely to obtain the type name "List."

Let us understand this with a code example. In the following piece of code, you must specify type arguments for the code to compile.

string typeNameList = nameof(List<int>);
string typeNameDictionary = nameof(Dictionary<int, string>);

With C# 14, unbound generics work directly. You no longer need to specify the type arguments explicitly, as shown in the code snippet given below.

string typeNameList = nameof(List<>);
string typeNameDictionary = nameof(Dictionary<,>);

Hence, with C# 14, the following lines of code work perfectly.

Console.WriteLine(nameof(List<>));
Console.WriteLine(nameof(Dictionary<,>));
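In either form, nameof produces just the simple type name, without arity or type arguments. A quick check with the long-supported closed-generic form makes this clear (this sketch compiles on any recent C# version, so you can run it even without the C# 14 toolchain):

```csharp
using System;
using System.Collections.Generic;

public static class Demo
{
    public static void Main()
    {
        // nameof yields the bare identifier; the type arguments are dropped.
        Console.WriteLine(nameof(List<int>));            // List
        Console.WriteLine(nameof(Dictionary<int, int>)); // Dictionary
    }
}
```

The C# 14 unbound forms nameof(List<>) and nameof(Dictionary<,>) return exactly the same strings; the improvement is simply that you no longer have to invent placeholder type arguments to get them.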

User-defined compound assignment operators

C# 14 adds support for user-defined compound assignment operators. In earlier versions of the language, an expression such as x += y on a user-defined type was always rewritten as x = x + y using the overloaded binary + operator; C# 14 lets you overload the compound form directly, so the operator can update the existing instance in place. You can overload the +=, -=, *=, /=, %=, &=, |=, ^=, <<=, and >>= operators in C# 14.

Consider the following code snippet, which creates a ShoppingCart class in which the += operator is overloaded.

public class ShoppingCart
{
    public int TotalQuantity { get; private set; } = 0;
    public decimal TotalAmount { get; private set; } = 0m;
    public void operator +=(int quantity)
    {
        TotalQuantity += quantity;
    }
    public void operator +=(decimal amount)
    {
        TotalAmount += amount;
    }
}

The code snippet below shows how you can use the ShoppingCart class.

var cart = new ShoppingCart();
cart += 3;        // invokes operator +=(int) and updates TotalQuantity in place
cart += 49.99m;   // invokes operator +=(decimal) and updates TotalAmount in place
Console.WriteLine($"{cart.TotalQuantity} items, {cart.TotalAmount} total");

Thanks to user-defined compound assignment operators, we get cleaner, simpler, and more readable code.
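For contrast, here is a minimal sketch of the pre-C# 14 pattern (the Counter type is hypothetical, not from the article): only the binary + operator can be overloaded, so every compound assignment is lowered to a reassignment that allocates a new instance. This sketch compiles on any C# version:

```csharp
using System;

// Pre-C# 14 pattern: `c += 5` is rewritten by the compiler as
// `c = c + 5`, producing a brand-new Counter each time.
public sealed class Counter
{
    public int Value { get; }
    public Counter(int value) => Value = value;

    public static Counter operator +(Counter left, int right) =>
        new Counter(left.Value + right);
}

public static class Demo
{
    public static void Main()
    {
        var c = new Counter(10);
        c += 5;                     // same as: c = c + 5
        Console.WriteLine(c.Value); // 15
    }
}
```

The C# 14 instance form (public void operator +=) avoids that extra allocation and reassignment, which is exactly why the ShoppingCart example above can mutate its totals in place.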

Set TargetFramework to .NET 10

Naturally, you must have .NET 10 installed on your computer to work with C# 14. If you want to switch your existing projects to use C# 14, you will need to set the TargetFramework to net10.0, as shown in the project file snippet given below.


<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <LangVersion>preview</LangVersion>
    <TargetFramework>net10.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
</Project>

You can learn more about the new features in C# 14 here and here.

The C# programming language has improved significantly since its initial release as part of Visual Studio .NET 2002. That was a time when you had to write a lot of verbose code to create C# programs. The new features introduced in C# 14 promise to boost your productivity and help you write cleaner, more maintainable, and more performant code. Whether you are building an enterprise application, a mobile application, or a web application, this new version of C# provides you with all you need to create world-class contemporary applications.

Posit AI Weblog: Getting began with Keras from R

If you've been thinking about diving into deep learning for a while – using R, preferentially – now is a good time. For TensorFlow / Keras, one of the predominant deep learning frameworks on the market, last year was a year of substantial change; for users, this sometimes meant ambiguity and confusion about the "right" (or: recommended) way to do things. By now, TensorFlow 2.0 has been the current stable release for about two months; the mists have cleared away, and patterns have emerged, enabling leaner, more modular code that accomplishes a lot in just a few lines.

To give the new features the space they deserve, and to assemble central contributions from related packages in one place, we have significantly reworked the TensorFlow for R website. So this post really has two goals.

First, it would like to do exactly what the title suggests: point new users to resources that make for an effective start into the subject.

Second, it can be read as a "best of new website content". Thus, as an existing user, you might still be interested in giving it a quick skim, checking for pointers to new features that appear in familiar contexts. To make this easier, we'll add side notes to highlight new features.

Overall, the structure of what follows is this. We start from the core question: How do you build a model?, then frame it from both sides; i.e.: What comes before? (data loading / preprocessing) and What comes after? (model saving / deployment).

After that, we quickly move on to creating models for different types of data: images, text, tabular.

Then, we touch on where to find background information, such as: How do I add a custom callback? How do I create a custom layer? How can I define my own training loop?

Finally, we round up with something that looks like a tiny technical addition but has a far greater impact: integrating modules from TensorFlow (TF) Hub.

Getting started

How to build a model?

If linear regression is the Hello World of machine learning, non-linear regression has to be the Hello World of neural networks. The Basic Regression tutorial shows how to train a dense network on the Boston Housing dataset. This example uses the Keras Functional API, one of the two "classical" model-building approaches – the one that tends to be used when some kind of flexibility is required. In this case, the desire for flexibility comes from the use of feature columns – a nice new addition to TensorFlow that allows for convenient integration of e.g. feature normalization (more about this in the next section).

This introduction to regression is complemented by a tutorial on multi-class classification using "Fashion MNIST". It is equally suited for a first encounter with Keras.

A third tutorial in this section is devoted to text classification. Here too, there is a hidden gem in the current version that makes text preprocessing a lot easier: layer_text_vectorization, one of the brand new Keras preprocessing layers. If you've used Keras for NLP before: no more messing with text_tokenizer!

These tutorials are good introductions explaining code as well as concepts. What if you're familiar with the basic procedure and just need a quick reminder (or: something to quickly copy-paste from)? The ideal document to consult for these purposes is the Overview.

Now – knowing how to build models is fine, but as in data science overall, there is no modeling without data.

Data ingestion and preprocessing

Two detailed, end-to-end tutorials show how to load csv data and
images, respectively.

In current Keras, two mechanisms are central to data preparation. One is the use of tfdatasets pipelines. tfdatasets lets you load data in a streaming fashion (batch-by-batch), optionally applying transformations as you go. The other handy device here is feature specs and feature columns. Together with a matching Keras layer, these allow for transforming the input data without having to think about what the new format will mean to Keras.

While there are other types of data not discussed in the docs, the principles – pre-processing pipelines and feature extraction – generalize.

Model saving

The best-performing model is of little use if ephemeral. Straightforward ways of saving Keras models are explained in a dedicated tutorial.

And unless one is just tinkering around, the question will often be: How can I deploy my model?
There is a complete new section on deployment, featuring options like plumber, Shiny, TensorFlow Serving and RStudio Connect.

After this workflow-oriented run-through, let's look at the different types of data you might want to model.

Neural networks for different kinds of data

No introduction to deep learning is complete without image classification. The "Fashion MNIST" classification tutorial mentioned at the beginning is a good introduction, but it uses a fully connected neural network to make it easy to remain focused on the overall approach. Standard models for image recognition, however, are commonly based on a convolutional architecture. Here is a nice introductory tutorial.

For text data, the concept of embeddings – distributed representations endowed with a measure of similarity – is central. As in the aforementioned text classification tutorial, embeddings can be learned using the respective Keras layer (layer_embedding); in fact, the more idiosyncratic the dataset, the more recommendable this approach. Often though, it makes a lot of sense to use pre-trained embeddings, obtained from large language models trained on huge amounts of data. With TensorFlow Hub, discussed in more detail in the last section, pre-trained embeddings can be made use of simply by integrating an adequate hub layer, as shown in one of the Hub tutorials.

As opposed to images and text, "normal", a.k.a. tabular, a.k.a. structured data often seems like less of a candidate for deep learning. Historically, the mix of data types – numeric, binary, categorical – together with different handling in the network ("leave alone" or embed) used to require a fair amount of manual fiddling. In contrast, the Structured data tutorial shows the, quote-unquote, modern way, again using feature columns and feature specs. The consequence: if you're not sure that in the area of tabular data, deep learning will lead to improved performance – if it's as easy as that, why not give it a try?

The Guide section has lots of additional information, covering specific questions that will come up when coding Keras models,

as well as background knowledge and terminology: What are tensors and Variables, and how does automatic differentiation work in TensorFlow?

Just as the Overview serves as a quick reference for the basics, for advanced topics there is a Quickstart that, in one end-to-end example, shows how to define and train a custom model. One especially nice aspect is the use of tfautograph, a package developed by T. Kalinowski that – among other things – allows for concisely iterating over a dataset in a for loop.

Finally, let's talk about TF Hub.

A special highlight: Hub layers

One of the most fascinating aspects of modern neural network architectures is the use of transfer learning. Not everyone has the data, or computing facilities, to train big networks on big data from scratch. Through transfer learning, existing pre-trained models can be used for similar (but not identical) applications and in similar (but not identical) domains.

Depending on one's requirements, building on an existing model can be more or less cumbersome. Some time ago, TensorFlow Hub was created as a mechanism to publicly share models, or modules, that is, reusable building blocks that can be made use of by others.
Until recently, though, there was no convenient way to incorporate these modules.

Starting from TensorFlow 2.0, Hub modules can now seamlessly be integrated into Keras models, using layer_hub. This is demonstrated in two tutorials, for text and images, respectively. But really, these two documents are just starting points: starting points into a journey of experimentation, with other modules, combinations of modules, areas of application…

In sum, we hope you have fun with the "new" (TF 2.0) Keras and find the documentation useful.
Thanks for reading!

Android 17 beta 2 brings a brand new multitasking trick and cross-device handoff

What you need to know

  • In Android 17 Beta 2, users can now turn any application into a floating bubble by long-pressing the app icon in the launcher.
  • Cross-device app Handoff lets you continue tasks seamlessly across Android devices.
  • A system-level contact picker now limits apps to temporary, read-only contact access.
  • Android 17 Beta 2 expands SMS OTP protection and strengthens local network privacy.

Google faced a few issues with the rollout of the first Android 17 beta, but the company isn't waiting long for the next release. Google has already started rolling out Android 17 Beta 2 for developers.

While it may seem like not much time has passed, Android 17 Beta 2 is shaping up to be a fairly significant update with several new features and plenty of bug fixes, including a fix for a bug that caused spontaneous reboots and interface freezes.

Popular brain supplement linked to shorter lifespan in men

A recent study published in Aging-US explored how two common amino acids may influence how long people live. The paper, titled "The role of phenylalanine and tyrosine in longevity: a cohort and Mendelian randomization study," examined whether levels of these nutrients in the blood are associated with lifespan.

Led by Jie V. Zhao, Yitang Sun, Junmeng Zhang, and Kaixiong Ye from the University of Hong Kong and the University of Georgia, the research team focused on phenylalanine and tyrosine. Their findings suggest that higher tyrosine levels are associated with shorter life expectancy in men, raising the possibility that longevity strategies may need to differ by sex.

Amino Acids, Brain Function, and Aging

Phenylalanine and tyrosine are amino acids that play important roles in metabolism and brain activity. They are naturally present in protein-rich foods and are also sold as dietary supplements. Despite their widespread use, scientists still don't fully understand how these compounds may affect the aging process over time.

Tyrosine is especially notable because it helps produce neurotransmitters such as dopamine, which influence mood, motivation, and cognitive performance. Because of its role in brain chemistry, tyrosine has drawn increasing interest in aging research.

Large UK Biobank Study Examines Lifespan

To investigate potential links to longevity, the researchers analyzed health and genetic data from more than 270,000 participants in the UK Biobank. They used both observational data and genetic methods to assess whether blood levels of phenylalanine and tyrosine were related to overall mortality and predicted lifespan.

At first, both amino acids appeared to be associated with a higher risk of death. However, after deeper analysis, only tyrosine showed a consistent and potentially causal relationship with reduced life expectancy in men. Genetic modeling suggested that men with elevated tyrosine levels might live nearly one year less on average. No meaningful association was found in women.

The connection remained even after accounting for other related factors, including phenylalanine. This strengthens the possibility that tyrosine itself may independently influence aging. The researchers also noted that men generally have higher tyrosine levels than women, which may help explain part of the longstanding lifespan gap between the sexes.

"Phenylalanine showed no association with lifespan in either men or women after controlling for tyrosine."

Possible Biological Explanations

Scientists are still working to understand why tyrosine might affect lifespan in men. One possibility involves insulin resistance, a condition linked to many age-related diseases. Tyrosine is also involved in producing stress-related neurotransmitters, which may influence metabolic and hormonal pathways differently in men and women. Differences in hormone signaling could also help explain why the effect appeared only in men.

Supplement Use and Longevity Questions

Tyrosine is often marketed as a supplement to improve focus and mental performance. Although this study did not directly test tyrosine supplements, the findings raise questions about its long-term impact on health and lifespan.

The researchers suggest that individuals with high tyrosine levels might benefit from dietary adjustments. Approaches such as moderating overall protein intake could potentially lower tyrosine levels and support healthier aging.

More research will be needed to confirm these results and to determine whether changes in diet or lifestyle can safely reduce tyrosine levels and promote longer life.

Programming an estimation command in Stata: Using a subroutine to parse a complex option

I make two improvements to the command implementing the ordinary least-squares (OLS) estimator that I discussed in Programming an estimation command in Stata: Allowing for options. First, I add an option for a cluster-robust estimator of the variance-covariance of the estimator (VCE). Second, I make the command accept the modern syntax for either a robust or a cluster-robust estimator of the VCE. In the process, I use subroutines in my ado-program to facilitate the parsing, and I discuss some advanced parsing techniques.

This is the tenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Allowing for a robust or a cluster-robust VCE

The syntax of myregress9, which I discussed in Programming an estimation command in Stata: Allowing for options, is

myregress9 depvar [indepvars] [if] [in] [, robust noconstant]

The syntax of myregress10, which I discuss here, is

myregress10 depvar [indepvars] [if] [in] [, vce(robust | cluster clustervar) noconstant]

By default, myregress10 estimates the VCE assuming that the errors are independently and identically distributed (IID). If the option vce(robust) is specified, myregress10 uses the robust estimator of the VCE. If the option vce(cluster clustervar) is specified, myregress10 uses the cluster-robust estimator of the VCE. See Cameron and Trivedi (2005), Stock and Watson (2010), or Wooldridge (2010, 2015) for introductions to OLS; see Programming an estimation command in Stata: Using Stata matrix commands and functions to compute OLS objects for the formulas and the Stata matrix implementations.

I recommend that you click on the file name to download the code for myregress10.ado. To avoid scrolling, view the code in the do-file editor, or your favorite text editor, to see the line numbers.

Code block 1: myregress10.ado


*! version 10.0.0  02Dec2015
program define myregress10, eclass sortpreserve
    version 14

    syntax varlist(numeric ts fv) [if] [in] [, vce(string) noCONStant ]
    marksample touse

    gettoken depvar indeps : varlist
    _fv_check_depvar `depvar'

    tempname zpz xpx xpy xpxi b V
    tempvar  xbhat res res2

    if `"`vce'"' != "" {
        my_vce_parse , vce(`vce')
        local vcetype     "robust"
        local clustervar  "`r(clustervar)'"
        if "`clustervar'" != "" {
            markout `touse' `clustervar'
            sort `clustervar'
        }
    }

    quietly matrix accum `zpz' = `varlist' if `touse' , `constant'
    local N                    = r(N)
    local p                    = colsof(`zpz')
    matrix `xpx'               = `zpz'[2..`p', 2..`p']
    matrix `xpy'               = `zpz'[2..`p', 1]
    matrix `xpxi'              = syminv(`xpx')
    matrix `b'                 = (`xpxi'*`xpy')'
    local k                    = `p' - diag0cnt(`xpxi') - 1
    quietly matrix score double `xbhat' = `b' if `touse'
    quietly generate double `res'       = (`depvar' - `xbhat') if `touse'
    quietly generate double `res2'      = (`res')^2 if `touse'

    if "`vcetype'" == "robust" {
        if "`clustervar'" == "" {
            tempname M
            quietly matrix accum `M' = `indeps'         ///
                [iweight=`res2'] if `touse' , `constant'
            local fac                = (`N'/(`N'-`k'))
            local df_r               = (`N'-`k')
        }
        else  {
            tempvar idvar
            tempname M
            quietly egen `idvar' = group(`clustervar') if `touse'
            quietly summarize `idvar' if `touse', meanonly
            local Nc   = r(max)
            local fac  = ((`N'-1)/(`N'-`k')*(`Nc'/(`Nc'-1)))
            local df_r = (`Nc'-1)
            matrix opaccum `M' = `indeps' if `touse'     ///
                , group(`clustervar') opvar(`res')
        }
        matrix `V' = (`fac')*`xpxi'*`M'*`xpxi'
        local vce                   "robust"
        local vcetype               "Robust"
    }
    else {                            // IID case
        quietly summarize `res2' if `touse' , meanonly
        local sum           = r(sum)
        local s2            = `sum'/(`N'-`k')
        local df_r          = (`N'-`k')
        matrix `V'          = `s2'*`xpxi'
    }

    ereturn post `b' `V', esample(`touse') buildfvinfo
    ereturn scalar N       = `N'
    ereturn scalar rank    = `k'
    ereturn scalar df_r    = `df_r'
    ereturn local  vce     "`vce'"
    ereturn local  vcetype "`vcetype'"
    ereturn local clustvar "`clustervar'"
    ereturn local  cmd     "myregress10"
    ereturn display
end

program define my_vce_parse, rclass
    syntax  [, vce(string) ]

    local case : word count `vce'

    if `case' > 2 {
        my_vce_error , typed(`vce')
    }

    local 0 `", `vce'"'
    syntax  [, Robust CLuster * ]

    if `case' == 2 {
        if "`robust'" == "robust" | "`cluster'" == "" {
            my_vce_error , typed(`vce')
        }

        capture confirm numeric variable `options'
        if _rc {
            my_vce_error , typed(`vce')
        }

        local clustervar "`options'"
    }
    else {    // case = 1
        if "`robust'" == "" {
            my_vce_error , typed(`vce')
        }

    }

    return clear
    return local clustervar "`clustervar'"
end

program define my_vce_error
    syntax , typed(string)

    display `"{red}{bf:vce(`typed')} invalid"'
    error 498
end

The syntax command on line 5 puts whatever the user encloses in vce() into a local macro called vce. For example, if the user types


. myregress10 price mpg trunk , vce(hello there)

the local macro vce will contain "hello there". If the user does not specify something in the vce() option, the local macro vce will be empty. Line 14 uses this condition to execute lines 15–21 only if the user has specified something in option vce().

When the user specifies something in the vce() option, line 15 calls the ado-subroutine my_vce_parse to parse what is in the local macro vce. my_vce_parse stores the name of the cluster variable in r(clustervar) and deals with error conditions, as I discuss below. Line 16 stores "robust" into the local macro vcetype, and line 17 stores the contents of the local macro r(clustervar) created by my_vce_parse into the local macro clustervar.

If the user does not specify something in vce(), the local macro vcetype will be empty, and line 36 ensures that myregress10 will compute an IID estimator of the VCE.

Lines 19 and 20 are executed only if the local macro clustervar is not empty. Line 19 updates the touse variable, whose name is stored in the local macro touse, to account for missing values in the cluster variable, whose name is stored in clustervar. Line 20 sorts the dataset in ascending order of the cluster variable. Users do not want estimation commands resorting their datasets, so on line 2, I specified the sortpreserve option on program define to keep the dataset in the order it was in when myregress10 was executed by the user.

Lines 36–65 compute the requested estimator of the VCE. Recall that the local macro vcetype is either empty or contains "robust", and that the local macro clustervar is either empty or contains the name of the cluster variable. The if and else statements use the values stored in vcetype and clustervar to execute one of three blocks of code.

  1. Lines 38–42 compute a robust estimator of the VCE when vcetype contains "robust" and clustervar is empty.
  2. Lines 45–53 compute a cluster-robust estimator of the VCE when vcetype contains "robust" and clustervar contains the name of the cluster variable.
  3. Lines 60–64 compute an IID estimator of the VCE when vcetype does not contain "robust".

Line 73 stores the name of the cluster variable in e(clustervar), if the local macro clustervar is not empty.

Lines 78–111 define the rclass ado-subroutine my_vce_parse, which performs two tasks. First, it stores the name of the cluster variable in the local macro r(clustervar) when the user specifies vce(cluster clustervar). Second, it finds cases in which the user specified a syntax error in vce() and returns an error in such cases.

Putting these parsing details into a subroutine makes the main command much easier to follow. I recommend that you encapsulate details in subroutines.

The ado-subroutine my_vce_parse is local to the ado-command myregress10; the name my_vce_parse is in a namespace local to myregress10, and my_vce_parse can only be executed from within myregress10.

Line 79 uses syntax to store whatever the user specified in the option vce() in the local macro vce. Line 81 puts the number of words in vce into the local macro case. Line 83 causes the ado-subroutine my_vce_error to display an error message and return error code 498 when there are more than two words in vce. (Recall that vce should contain either robust or cluster clustervar.)

Having ruled out the cases with more than two words, line 87 stores what the local macro vce contains in the local macro 0. Line 88 uses syntax to parse what is in the local macro 0. If the user specified vce(robust), or a valid abbreviation thereof, syntax stores "robust" in the local macro robust; otherwise, the local macro robust is empty. If the user specified vce(cluster something), or a valid abbreviation of cluster, syntax stores "cluster" in the local macro cluster; otherwise, the local macro cluster is empty. The option * causes syntax to put any remaining options into the local macro options. In this case, syntax stores the something in the local macro options.

Remember the trick used in lines 87 and 88. Option parsing is frequently made much easier by storing what a local macro contains in the local macro 0 and using syntax to parse it.

When there are two words in the local macro vce, lines 91–100 ensure that the first word is "cluster" and that the second word, stored in the local macro options, is the name of a numeric variable. When all is well, line 100 stores the name of this numeric variable in the local macro clustervar. Lines 95–98 use a subtle construction to display a custom error message. Rather than let confirm display its own error message, lines 95–98 use capture and an if condition to display our custom message. In detail, line 95 uses confirm to verify that the local macro options contains the name of a numeric variable. capture puts the return code produced by confirm in the scalar _rc. When options contains the name of a numeric variable, confirm produces the return code 0 and capture stores 0 in _rc; otherwise, confirm produces a positive return code, and capture stores this positive return code in _rc.

When all is well, line 109 clears whatever was in r(), and line 110 stores the name of the cluster variable in r(clustervar).

Lines 113–118 define the ado-subroutine my_vce_error, which displays a custom error message. Like my_vce_parse, my_vce_error is local to myregress10.ado.

Done and undone

I added an option for a cluster-robust estimator of the VCE, and I made myregress10 accept the modern syntax for either a robust or a cluster-robust estimator of the VCE. In the process, I used subroutines in myregress10.ado to facilitate the parsing, and I discussed some advanced parsing techniques.

myregress10.ado would have been much harder to read if I had not used subroutines to simplify the main routine.

Although it may seem that I have covered every possible nuance, I have dealt with only a few. Type help syntax for more details about parsing options using the syntax command.

References

Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and Applications. Cambridge: Cambridge University Press.

Stock, J. H., and M. W. Watson. 2010. Introduction to Econometrics. 3rd ed. Boston, MA: Addison-Wesley.

Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, MA: MIT Press.

Wooldridge, J. M. 2015. Introductory Econometrics: A Modern Approach. 6th ed. Cincinnati, OH: South-Western.